Feb 18 16:29:38 crc systemd[1]: Starting Kubernetes Kubelet... Feb 18 16:29:38 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 16:29:38 
crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 
16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc 
restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 
crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 
crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:38 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 
16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 16:29:39 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 16:29:39 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 16:29:40 crc kubenswrapper[4812]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.218545 4812 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225428 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225460 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225473 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225485 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225495 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225507 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225517 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225526 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225535 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225545 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225554 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225563 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225570 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225579 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225586 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225594 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225617 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225625 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225636 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225643 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225651 
4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225659 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225667 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225674 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225682 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225690 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225697 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225705 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225712 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225720 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225727 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225735 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225742 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225750 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225757 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225765 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225773 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225781 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225789 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225797 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225804 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225814 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225824 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225832 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225840 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225849 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225857 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225866 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225873 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225882 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225890 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225897 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225910 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225918 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225926 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225933 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225941 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225948 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225956 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225964 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225971 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225980 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225987 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.225995 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226003 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226013 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
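The long runs of feature_gate.go:330 warnings are the upstream kubelet rejecting gate names it does not register, which here look like OpenShift-side gates such as PinnedImages, MachineConfigNodes or NewOLM; the same set is re-emitted on every pass over the gate map, so the noise multiplies. A small tally makes the repetition obvious. This is illustrative only, and as before it assumes the journal excerpt has been saved to a hypothetical "kubelet.log".

```python
# Illustrative tally of "unrecognized feature gate" warnings in the journal excerpt.
import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def gate_warnings(path="kubelet.log"):
    with open(path) as f:
        return Counter(m.group(1) for line in f for m in UNRECOGNIZED.finditer(line))

if __name__ == "__main__":
    gates = gate_warnings()
    print(f"{sum(gates.values())} warnings covering {len(gates)} distinct gates")
    for name, count in gates.most_common(10):
        print(f"{count:3d}  {name}")
```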
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226022 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226034 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226043 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226051 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.226062 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227208 4812 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227241 4812 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227303 4812 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227320 4812 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227332 4812 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227342 4812 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227354 4812 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227366 4812 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227375 4812 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227385 4812 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227394 4812 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227404 4812 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227413 4812 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227422 4812 flags.go:64] FLAG: --cgroup-root="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227431 4812 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227440 4812 flags.go:64] FLAG: --client-ca-file="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227449 4812 flags.go:64] FLAG: --cloud-config="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227458 4812 flags.go:64] FLAG: --cloud-provider="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227467 4812 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227484 4812 flags.go:64] FLAG: --cluster-domain="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227493 4812 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227503 4812 flags.go:64] FLAG: --config-dir="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227523 4812 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227534 4812 flags.go:64] FLAG: 
--container-log-max-files="5" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227545 4812 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227555 4812 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227564 4812 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227574 4812 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227583 4812 flags.go:64] FLAG: --contention-profiling="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227593 4812 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227602 4812 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227612 4812 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227621 4812 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227633 4812 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227642 4812 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227651 4812 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227659 4812 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227668 4812 flags.go:64] FLAG: --enable-server="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227677 4812 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227697 4812 flags.go:64] FLAG: --event-burst="100" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227706 4812 flags.go:64] FLAG: --event-qps="50" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227715 4812 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227724 4812 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227733 4812 flags.go:64] FLAG: --eviction-hard="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227745 4812 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227754 4812 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227762 4812 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227772 4812 flags.go:64] FLAG: --eviction-soft="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227780 4812 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227789 4812 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227798 4812 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227807 4812 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227816 4812 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 
16:29:40.227825 4812 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227833 4812 flags.go:64] FLAG: --feature-gates="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227845 4812 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227854 4812 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227864 4812 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227885 4812 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227895 4812 flags.go:64] FLAG: --healthz-port="10248" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227905 4812 flags.go:64] FLAG: --help="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227914 4812 flags.go:64] FLAG: --hostname-override="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227922 4812 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227933 4812 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227949 4812 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227960 4812 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227970 4812 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227979 4812 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227988 4812 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.227996 4812 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228006 4812 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228014 4812 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228024 4812 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228033 4812 flags.go:64] FLAG: --kube-reserved="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228042 4812 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228051 4812 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228061 4812 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228070 4812 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228078 4812 flags.go:64] FLAG: --lock-file="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228087 4812 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228138 4812 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228152 4812 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228169 4812 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228181 4812 flags.go:64] FLAG: 
--log-text-info-buffer-size="0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228192 4812 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228203 4812 flags.go:64] FLAG: --logging-format="text" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228214 4812 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228227 4812 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228238 4812 flags.go:64] FLAG: --manifest-url="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228249 4812 flags.go:64] FLAG: --manifest-url-header="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228264 4812 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228276 4812 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228288 4812 flags.go:64] FLAG: --max-pods="110" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228297 4812 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228322 4812 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228335 4812 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228345 4812 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228355 4812 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228363 4812 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228373 4812 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228406 4812 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228415 4812 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228425 4812 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228434 4812 flags.go:64] FLAG: --pod-cidr="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228443 4812 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228456 4812 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228465 4812 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228474 4812 flags.go:64] FLAG: --pods-per-core="0" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228483 4812 flags.go:64] FLAG: --port="10250" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228492 4812 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228501 4812 flags.go:64] FLAG: --provider-id="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228510 4812 flags.go:64] FLAG: --qos-reserved="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228526 4812 flags.go:64] FLAG: 
--read-only-port="10255" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228535 4812 flags.go:64] FLAG: --register-node="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228543 4812 flags.go:64] FLAG: --register-schedulable="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228552 4812 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228567 4812 flags.go:64] FLAG: --registry-burst="10" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228576 4812 flags.go:64] FLAG: --registry-qps="5" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228585 4812 flags.go:64] FLAG: --reserved-cpus="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228593 4812 flags.go:64] FLAG: --reserved-memory="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228604 4812 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228613 4812 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228622 4812 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228631 4812 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228640 4812 flags.go:64] FLAG: --runonce="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228649 4812 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228658 4812 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228667 4812 flags.go:64] FLAG: --seccomp-default="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228677 4812 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228685 4812 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228723 4812 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228733 4812 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228742 4812 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228751 4812 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228759 4812 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228768 4812 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228777 4812 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228786 4812 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228796 4812 flags.go:64] FLAG: --system-cgroups="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228804 4812 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228818 4812 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228827 4812 flags.go:64] FLAG: --tls-cert-file="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228835 4812 flags.go:64] 
FLAG: --tls-cipher-suites="[]" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228852 4812 flags.go:64] FLAG: --tls-min-version="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228864 4812 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228873 4812 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228882 4812 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228891 4812 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228911 4812 flags.go:64] FLAG: --v="2" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228923 4812 flags.go:64] FLAG: --version="false" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228934 4812 flags.go:64] FLAG: --vmodule="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228945 4812 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.228954 4812 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229264 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229280 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229289 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229299 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229309 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229318 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229326 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229336 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229344 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229352 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229361 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229369 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229377 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.229482 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.231850 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232273 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232294 4812 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAzure Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232304 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232314 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232324 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232336 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232346 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232355 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232363 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232374 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232383 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232391 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232400 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232408 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232419 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232428 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232437 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232446 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232455 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232464 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232474 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232485 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232493 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232502 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232510 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232519 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232528 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232536 4812 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232550 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232564 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232575 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232584 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232596 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232606 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232616 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232630 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232642 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232654 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232664 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232673 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232682 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232691 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232702 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
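Interleaved with the gate warnings, the flags.go:64 "FLAG:" entries above are a complete dump of the kubelet's effective command-line settings (--config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11", --max-pods="110", and so on). They fold easily into a dictionary for diffing against another node or a previous boot; the sketch below again assumes the journal text sits in a hypothetical "kubelet.log" and only handles the quoted FLAG format shown here.

```python
# Sketch: collect the FLAG: --name="value" dump into a dict for easy diffing.
import re

FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def kubelet_flags(path="kubelet.log"):
    with open(path) as f:
        return {m.group(1): m.group(2) for line in f for m in FLAG.finditer(line)}

if __name__ == "__main__":
    flags = kubelet_flags()
    print(flags.get("--node-ip"), flags.get("--max-pods"), flags.get("--register-with-taints"))
```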
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232713 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232723 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232731 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232741 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232750 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232760 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232769 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232778 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232789 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232798 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232806 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232816 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.232825 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.232842 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.246394 4812 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.246441 4812 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246575 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246589 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246599 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246611 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246621 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246632 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246642 4812 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246651 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246659 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246671 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246682 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246692 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246729 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246741 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246750 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246762 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246771 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246780 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246789 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246799 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246809 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246817 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246826 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246834 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246843 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246852 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246861 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246871 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246880 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246889 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246899 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246908 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 16:29:40 crc kubenswrapper[4812]: 
W0218 16:29:40.246917 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246927 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246936 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246945 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246954 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246962 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246970 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246979 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246987 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.246996 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247004 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247013 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247022 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247030 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247039 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247047 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247058 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247068 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247076 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247085 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247124 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247137 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247147 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247157 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247166 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247177 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247185 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247194 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247202 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247210 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247219 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247227 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247235 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247244 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247253 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247262 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247270 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247281 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247292 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.247306 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247555 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247569 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247578 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247587 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247596 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247606 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247616 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247625 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247634 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247643 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247654 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247663 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247672 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247684 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
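After each parsing pass the kubelet logs the resolved result as a "feature gates: {map[...]}" summary (at 16:29:40.232842 and .247306 above, and .248277 below), and all of them resolve to the same fifteen gates. Turning one of those summary lines into a Python dict is a small transform; the sketch below only handles the exact map[Name:true/false ...] shape seen in this log.

```python
# Sketch: turn a "feature gates: {map[Name:bool ...]}" summary line into a dict.
import re

def parse_gate_summary(line):
    body = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if body is None:
        return {}
    return {k: v == "true" for k, v in (pair.split(":") for pair in body.group(1).split())}

# Shortened example input; the real summary lines carry fifteen gates.
example = ("feature gates: {map[CloudDualStackNodeIPs:true "
           "DisableKubeletCloudCredentialProviders:true KMSv1:true NodeSwap:false]}")
print(parse_gate_summary(example))
```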
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247696 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247708 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247718 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247729 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247739 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247749 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247759 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247768 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247778 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247787 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247796 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247807 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247818 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247827 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247836 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247845 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247854 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247863 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247874 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247886 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247896 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247906 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247916 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247925 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247934 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247943 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247952 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247962 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247971 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247979 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247988 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.247997 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248007 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248017 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248026 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248035 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248044 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248052 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248062 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248071 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248079 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248088 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248127 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248140 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248153 4812 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248164 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248175 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248184 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248194 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248203 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248211 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248220 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248229 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248238 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248247 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248256 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.248264 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.248277 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.248560 4812 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.254879 4812 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.255027 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
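The entries here and just below switch from configuration to client certificate bootstrap: rotation is enabled, the existing kubeconfig is still valid, and the kube-apiserver-client-kubelet certificate is logged with an expiration of 2026-02-24 05:52:08 UTC and a rotation deadline of 2026-01-15 04:50:56 UTC, so the immediate CSR attempt that fails with "connection refused" against api-int.crc.testing:6443 simply reflects the API server not yet being reachable this early in startup. The gap between deadline and expiry is easy to sanity-check from the logged timestamps (fractional seconds dropped):

```python
# Quick arithmetic on the rotation timeline logged by certificate_manager below.
from datetime import datetime, timezone

expiry   = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)
deadline = datetime(2026, 1, 15, 4, 50, 56, tzinfo=timezone.utc)

print("rotation deadline falls", expiry - deadline, "before the certificate expires")
```

That works out to roughly 40 days of headroom before the certificate would actually expire.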
Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.257445 4812 server.go:997] "Starting client certificate rotation" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.257503 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.257727 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-15 04:50:56.913030511 +0000 UTC Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.257818 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.288248 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.290216 4812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.291778 4812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.315201 4812 log.go:25] "Validated CRI v1 runtime API" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.362337 4812 log.go:25] "Validated CRI v1 image API" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.364803 4812 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.370892 4812 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-16-24-37-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.370940 4812 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.405038 4812 manager.go:217] Machine: {Timestamp:2026-02-18 16:29:40.400322484 +0000 UTC m=+0.665933473 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:98e69d53-b6df-43fa-8be4-eb3c6f91bf68 BootID:64817a4e-e396-49fc-8ea4-fa691a9f8933 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a4:b6:6d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a4:b6:6d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:28:f0:3b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:20:1d:ac Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:25:c5:a5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e4:50:9e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:35:b1:5d:43:af Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4a:25:93:40:c7:0f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.405736 4812 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.406157 4812 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.408591 4812 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.409001 4812 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.409073 4812 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.411333 4812 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.411383 4812 
container_manager_linux.go:303] "Creating device plugin manager" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.411877 4812 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.411917 4812 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.412362 4812 state_mem.go:36] "Initialized new in-memory state store" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.412600 4812 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.416421 4812 kubelet.go:418] "Attempting to sync node with API server" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.416461 4812 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.416495 4812 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.416559 4812 kubelet.go:324] "Adding apiserver pod source" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.416586 4812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.421500 4812 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.422150 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.422221 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.422271 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.422303 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.422793 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
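Every API call in this stretch fails the same way: the CertificateSigningRequest post earlier and the Service and Node reflector lists here all end in dial tcp 38.102.83.106:6443: connect: connection refused, because the kubelet starts before anything is listening at api-int.crc.testing:6443. On this single-node cluster the control plane is typically brought up by this same kubelet from the static pod path logged above, so these refusals are expected during early startup. A quick, hypothetical reachability probe using the endpoint taken from those errors:

#!/usr/bin/env python3
"""Check whether the API endpoint the kubelet keeps dialing accepts connections yet."""
import socket
import sys

HOST, PORT = "api-int.crc.testing", 6443  # endpoint copied from the errors above

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"{HOST}:{PORT} is accepting connections")
except OSError as exc:
    # Mirrors the kubelet's "connect: connection refused" until the API server is up.
    print(f"{HOST}:{PORT} not reachable yet: {exc}")
    sys.exit(1)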
Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.426196 4812 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428199 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428275 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428302 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428324 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428363 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428389 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428411 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428445 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428549 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428578 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428638 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.428660 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.430188 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.431315 4812 server.go:1280] "Started kubelet" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.432906 4812 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.434146 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.432937 4812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 16:29:40 crc systemd[1]: Started Kubernetes Kubelet. 
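The two certificate managers in this boot (kube-apiserver-client-kubelet above, kubelet-serving just below) rotate against on-disk pairs under /var/lib/kubelet/pki, the same files the certificate_store.go:130 lines show being loaded. A hedged sketch for checking how much lifetime those files have left, assuming the openssl CLI is present on the node and using the paths exactly as logged:

#!/usr/bin/env python3
"""Print the notAfter date of the kubelet client and serving certificates."""
import subprocess

PEMS = [
    "/var/lib/kubelet/pki/kubelet-client-current.pem",   # loaded at certificate_store.go:130 above
    "/var/lib/kubelet/pki/kubelet-server-current.pem",
]

for pem in PEMS:
    # "openssl x509 -noout -enddate" prints e.g. "notAfter=Feb 24 05:52:08 2026 GMT"
    result = subprocess.run(
        ["openssl", "x509", "-noout", "-enddate", "-in", pem],
        capture_output=True, text=True, check=False,
    )
    print(pem, "->", (result.stdout or result.stderr).strip())

The dates it prints should line up with the expirations the certificate managers log (2026-02-24 05:52:08 UTC for the client pair here); the kubelet then schedules rotation ahead of that deadline on its own.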
Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.437185 4812 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.437728 4812 server.go:460] "Adding debug handlers to kubelet server" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.438300 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895642ec24c9d93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 16:29:40.431232403 +0000 UTC m=+0.696843352,LastTimestamp:2026-02-18 16:29:40.431232403 +0000 UTC m=+0.696843352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.441032 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.441285 4812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.441409 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:10:52.400024136 +0000 UTC Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.441860 4812 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.441930 4812 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.442031 4812 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.442053 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.446450 4812 factory.go:55] Registering systemd factory Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.446515 4812 factory.go:221] Registration of the systemd container factory successfully Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.447288 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.447417 4812 factory.go:153] Registering CRI-O factory Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.447491 4812 factory.go:221] Registration of the crio container factory successfully Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.447694 4812 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.447894 4812 factory.go:103] Registering Raw factory Feb 18 16:29:40 
crc kubenswrapper[4812]: I0218 16:29:40.447932 4812 manager.go:1196] Started watching for new ooms in manager Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.449035 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.449277 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.449554 4812 manager.go:319] Starting recovery of all containers Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472457 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472551 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472579 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472605 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472629 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472649 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472671 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472695 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 16:29:40 
crc kubenswrapper[4812]: I0218 16:29:40.472729 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472771 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472812 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472842 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472872 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472919 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.472949 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475528 4812 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475617 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475656 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475688 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475716 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475740 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475769 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475831 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475853 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475895 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475922 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.475949 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476048 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476078 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476131 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476153 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476175 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476209 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476262 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476285 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476365 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476387 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476411 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476432 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476455 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476475 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476496 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476517 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476552 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476579 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476603 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476627 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476668 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476698 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476724 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476748 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476771 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476796 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476835 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476860 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476904 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476929 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476970 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.476995 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477018 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477165 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477232 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477258 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477291 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477322 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477361 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477389 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477411 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477432 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477455 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477477 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477499 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477523 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477546 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477567 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477589 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477660 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477712 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477733 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477755 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477776 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477796 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477822 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477842 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477864 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477893 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477915 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477935 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477955 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477978 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.477999 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478018 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478042 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478064 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478086 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478141 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478163 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478184 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478206 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478226 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478247 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478271 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478303 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478325 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478345 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478429 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478457 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478482 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478511 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478535 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478558 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478579 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478601 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478649 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478676 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478696 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478717 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478740 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478761 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478781 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478806 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478827 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478873 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478896 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478917 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478938 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478959 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.478979 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479000 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479022 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479048 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479068 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479088 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479135 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479157 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479178 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479199 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479220 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479242 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479264 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479284 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479306 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479326 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479347 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479367 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479389 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479412 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479432 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479452 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479473 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479492 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479513 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479535 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479556 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479576 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479595 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479617 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479638 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479659 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479679 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479701 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479723 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479745 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479766 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479787 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479810 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479830 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479851 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479871 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479891 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479911 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479932 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479956 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.479980 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480000 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480043 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480065 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480087 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480142 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480211 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480233 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480254 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480276 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480296 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480345 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480370 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480398 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480418 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480441 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480469 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480493 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480513 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480534 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480555 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480583 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480603 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480624 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480645 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480666 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480686 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480708 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480729 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480794 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480814 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480858 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480879 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480902 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480932 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480953 4812 reconstruct.go:97] "Volume reconstruction finished" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.480966 4812 reconciler.go:26] "Reconciler: start to sync state" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.492678 4812 manager.go:324] Recovery completed Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.503940 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.504029 4812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506680 4812 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506740 4812 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.506779 4812 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.506899 4812 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.507688 4812 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.507711 4812 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.507735 4812 state_mem.go:36] "Initialized new in-memory state store" Feb 18 16:29:40 crc kubenswrapper[4812]: W0218 16:29:40.507715 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.507823 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.526351 4812 policy_none.go:49] "None policy: Start" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.527580 4812 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.527657 4812 state_mem.go:35] "Initializing new in-memory state store" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.542973 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.582266 4812 manager.go:334] "Starting Device Plugin manager" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.582602 4812 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.582616 4812 server.go:79] "Starting device plugin registration server" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.583126 4812 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.583139 4812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.584594 4812 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.584716 4812 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.584735 4812 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.597921 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 16:29:40 crc kubenswrapper[4812]: 
I0218 16:29:40.607336 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.607444 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.608729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.608774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.608787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.608924 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.609431 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.609517 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610323 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610578 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.610677 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611701 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611893 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.611969 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.612766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.612852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.612875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613172 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613205 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613241 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.613439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614818 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.614851 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.615581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.615626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.615643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.649141 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.683615 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.683736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.683783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.683810 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.683865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684009 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684124 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684149 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.684595 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.685114 
4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.685154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.685168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.685201 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.685775 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785797 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785919 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785943 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.785979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786066 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786161 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786412 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc 
kubenswrapper[4812]: I0218 16:29:40.786429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786382 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786548 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786529 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786560 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786574 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 
18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786617 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786622 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.786669 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.886577 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.888350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.888424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.888451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.888499 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:29:40 crc kubenswrapper[4812]: E0218 16:29:40.889282 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.950392 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.961387 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 16:29:40 crc kubenswrapper[4812]: I0218 16:29:40.968284 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.004807 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.005635 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9bb5c65619db58393c3efafe144b631476998f5cb269dd3c1c80cb05681ee559 WatchSource:0}: Error finding container 9bb5c65619db58393c3efafe144b631476998f5cb269dd3c1c80cb05681ee559: Status 404 returned error can't find the container with id 9bb5c65619db58393c3efafe144b631476998f5cb269dd3c1c80cb05681ee559 Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.007393 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fbdcdc3c49fb2a158b5f5fd9b2f64a3b5735efd0678b20fc917a5b3ad893932f WatchSource:0}: Error finding container fbdcdc3c49fb2a158b5f5fd9b2f64a3b5735efd0678b20fc917a5b3ad893932f: Status 404 returned error can't find the container with id fbdcdc3c49fb2a158b5f5fd9b2f64a3b5735efd0678b20fc917a5b3ad893932f Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.012412 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.014826 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-eed65c538d4d9b39d9b3b8676f79dfd4bd31d9f06c394cd55c53d27d13efeb10 WatchSource:0}: Error finding container eed65c538d4d9b39d9b3b8676f79dfd4bd31d9f06c394cd55c53d27d13efeb10: Status 404 returned error can't find the container with id eed65c538d4d9b39d9b3b8676f79dfd4bd31d9f06c394cd55c53d27d13efeb10 Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.020554 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f3467c7eaa0b78296700d8e65e6c4e15505729c2a6dba6b364350d432de2067f WatchSource:0}: Error finding container f3467c7eaa0b78296700d8e65e6c4e15505729c2a6dba6b364350d432de2067f: Status 404 returned error can't find the container with id f3467c7eaa0b78296700d8e65e6c4e15505729c2a6dba6b364350d432de2067f Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.028652 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-41173093c930872c7a9679979703bf7eeaea7480268e951aa6ea4810e9bdde90 WatchSource:0}: Error finding container 41173093c930872c7a9679979703bf7eeaea7480268e951aa6ea4810e9bdde90: Status 404 returned error can't find the container with id 41173093c930872c7a9679979703bf7eeaea7480268e951aa6ea4810e9bdde90 Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.050221 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.290118 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.292940 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.293015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.293043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.293125 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.293830 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.307302 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.307387 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.416923 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.417020 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.435628 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.441676 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:28:19.279091998 +0000 UTC Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.515956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41173093c930872c7a9679979703bf7eeaea7480268e951aa6ea4810e9bdde90"} Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.517532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3467c7eaa0b78296700d8e65e6c4e15505729c2a6dba6b364350d432de2067f"} Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.520038 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eed65c538d4d9b39d9b3b8676f79dfd4bd31d9f06c394cd55c53d27d13efeb10"} Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.521262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fbdcdc3c49fb2a158b5f5fd9b2f64a3b5735efd0678b20fc917a5b3ad893932f"} Feb 18 16:29:41 crc kubenswrapper[4812]: I0218 16:29:41.522386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9bb5c65619db58393c3efafe144b631476998f5cb269dd3c1c80cb05681ee559"} Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.660042 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.660246 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:41 crc kubenswrapper[4812]: W0218 16:29:41.772691 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.773253 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:41 crc kubenswrapper[4812]: E0218 16:29:41.851944 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.094525 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.096757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.096803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.096816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.096847 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:29:42 crc kubenswrapper[4812]: E0218 
16:29:42.097339 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.435963 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.442014 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:09:33.089211395 +0000 UTC Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.481166 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 16:29:42 crc kubenswrapper[4812]: E0218 16:29:42.483313 4812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.528645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.528720 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.528735 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.530879 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7" exitCode=0 Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.530975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.531139 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.533315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.533410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.533437 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.534090 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f64689a106dd4bf0329eef103162c9c04e2d1bd8f46649a663d5d2de70a14a64" exitCode=0 Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.534231 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f64689a106dd4bf0329eef103162c9c04e2d1bd8f46649a663d5d2de70a14a64"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.534338 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.535620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.535665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.535686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.536129 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537215 4812 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b" exitCode=0 Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537360 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.537981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.538575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.538601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.538613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.541818 4812 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2" exitCode=0 Feb 18 16:29:42 crc 
kubenswrapper[4812]: I0218 16:29:42.541860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2"} Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.541996 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.543181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.543231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:42 crc kubenswrapper[4812]: I0218 16:29:42.543250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: W0218 16:29:43.159690 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:43 crc kubenswrapper[4812]: E0218 16:29:43.160930 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.434746 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.443025 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:37:23.845860408 +0000 UTC Feb 18 16:29:43 crc kubenswrapper[4812]: E0218 16:29:43.453154 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Feb 18 16:29:43 crc kubenswrapper[4812]: W0218 16:29:43.459660 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:43 crc kubenswrapper[4812]: E0218 16:29:43.459731 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.549195 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.549244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.549258 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.549268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.551390 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da5d73d73a497c6d17eadcb2ec6aeb8935f4b72bab266d79a408198cfb219281" exitCode=0 Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.551473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da5d73d73a497c6d17eadcb2ec6aeb8935f4b72bab266d79a408198cfb219281"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.551528 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.554192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.554225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.554236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.556264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.556362 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.557137 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.557166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.557175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.561262 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 
16:29:43.561302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.561396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.561420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.563113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.563153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.563163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.565749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a"} Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.565831 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.566528 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.566565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.566578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.697972 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.704632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.704690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.704703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:43 crc kubenswrapper[4812]: I0218 16:29:43.704735 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:29:43 crc kubenswrapper[4812]: E0218 16:29:43.705237 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 18 
16:29:44 crc kubenswrapper[4812]: W0218 16:29:44.299731 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:44 crc kubenswrapper[4812]: E0218 16:29:44.299847 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.435528 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.443833 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:00:29.965303835 +0000 UTC Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.523606 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.572448 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.575533 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18" exitCode=255 Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.575656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18"} Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.575665 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.576777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.576818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.576867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.577640 4812 scope.go:117] "RemoveContainer" containerID="bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.578646 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0b98b9a246a3501eaa4158f46664849b8d3786648fd1f0e439412e48d066b763" exitCode=0 Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.578749 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:29:44 crc 
kubenswrapper[4812]: I0218 16:29:44.578786 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.579399 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0b98b9a246a3501eaa4158f46664849b8d3786648fd1f0e439412e48d066b763"} Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.579495 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.579628 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.579839 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.581458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.582130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.627157 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:44 crc kubenswrapper[4812]: I0218 16:29:44.666616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.443941 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:40:55.990792483 +0000 UTC Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.586217 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.588781 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576"} Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.588876 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.590487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.590549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.590570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.594301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f8dbadc356e757cea705a36b23351960c2cd347c7b99b661ea08ec3ca74e63b5"} Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.594359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ade8b9b56cd277055628cef4ca5cb923826054b4471ac50899f74862513eca5"} Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.594390 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69a849e077078d324869bfd03218394c3aab1a7fcb640544e0fec93b7c910fb4"} Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.594418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9cc0b9075ca9bc82c6b6716495569754a8a80bf118dffa5fae5e18a2c1c4e87"} Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.594426 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.596609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.596674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:45 crc kubenswrapper[4812]: I0218 16:29:45.596698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.444624 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:18:01.751811656 +0000 UTC Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.526341 4812 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.605039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e13ba74f03d1ced7fd4b90dd7611825963d84e4cfb54afaff5ab48585584d79"} Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.605182 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.605226 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.605262 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.605317 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.607788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.608049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.608072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.736995 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.906364 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.907948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.908006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.908025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:46 crc kubenswrapper[4812]: I0218 16:29:46.908060 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" 
Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.445470 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:39:02.207678807 +0000 UTC Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.607953 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.607983 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.608034 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.608974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.609081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.609113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.609416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.609470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.609493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:47 crc kubenswrapper[4812]: I0218 16:29:47.865224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.081550 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.446404 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:18:30.515986299 +0000 UTC Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.567575 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.567942 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.569733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.569782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.569795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.610215 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.610317 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 
16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.611514 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:48 crc kubenswrapper[4812]: I0218 16:29:48.673441 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:29:49 crc kubenswrapper[4812]: I0218 16:29:49.446539 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:03:42.560376641 +0000 UTC Feb 18 16:29:49 crc kubenswrapper[4812]: I0218 16:29:49.613687 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:49 crc kubenswrapper[4812]: I0218 16:29:49.615500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:49 crc kubenswrapper[4812]: I0218 16:29:49.615537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:49 crc kubenswrapper[4812]: I0218 16:29:49.615548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.013440 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.013677 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.015199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.015247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.015266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.447479 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:22:41.611180736 +0000 UTC Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.556731 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.557023 4812 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.558778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.558853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:50 crc kubenswrapper[4812]: I0218 16:29:50.558883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:50 crc kubenswrapper[4812]: E0218 16:29:50.598059 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 16:29:51 crc kubenswrapper[4812]: I0218 16:29:51.448450 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:30:24.397832977 +0000 UTC Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.288282 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.288570 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.290203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.290267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.290290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.296860 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.449602 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:31:12.464751143 +0000 UTC Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.635962 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.637980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.638030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:52 crc kubenswrapper[4812]: I0218 16:29:52.638042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:53 crc kubenswrapper[4812]: I0218 16:29:53.014403 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 16:29:53 crc kubenswrapper[4812]: I0218 16:29:53.014495 4812 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 16:29:53 crc kubenswrapper[4812]: I0218 16:29:53.450433 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:27:37.835582844 +0000 UTC Feb 18 16:29:54 crc kubenswrapper[4812]: I0218 16:29:54.451396 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:37:54.192603303 +0000 UTC Feb 18 16:29:54 crc kubenswrapper[4812]: W0218 16:29:54.723904 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 16:29:54 crc kubenswrapper[4812]: I0218 16:29:54.724039 4812 trace.go:236] Trace[1414259215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 16:29:44.722) (total time: 10001ms): Feb 18 16:29:54 crc kubenswrapper[4812]: Trace[1414259215]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:29:54.723) Feb 18 16:29:54 crc kubenswrapper[4812]: Trace[1414259215]: [10.001283837s] [10.001283837s] END Feb 18 16:29:54 crc kubenswrapper[4812]: E0218 16:29:54.724068 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 16:29:55 crc kubenswrapper[4812]: I0218 16:29:55.288697 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 16:29:55 crc kubenswrapper[4812]: I0218 16:29:55.288798 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 16:29:55 crc kubenswrapper[4812]: I0218 16:29:55.297250 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 18 16:29:55 crc 
kubenswrapper[4812]: I0218 16:29:55.297314 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 16:29:55 crc kubenswrapper[4812]: I0218 16:29:55.452342 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:26:03.627168406 +0000 UTC Feb 18 16:29:56 crc kubenswrapper[4812]: I0218 16:29:56.453214 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:57:20.577672722 +0000 UTC Feb 18 16:29:56 crc kubenswrapper[4812]: I0218 16:29:56.532215 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]log ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]etcd ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-filter ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-apiextensions-informers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-apiextensions-controllers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/crd-informer-synced ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-system-namespaces-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/bootstrap-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 18 16:29:56 crc 
kubenswrapper[4812]: [+]poststarthook/start-kube-aggregator-informers ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-registration-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-discovery-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]autoregister-completion ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-openapi-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 18 16:29:56 crc kubenswrapper[4812]: livez check failed Feb 18 16:29:56 crc kubenswrapper[4812]: I0218 16:29:56.532291 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:29:57 crc kubenswrapper[4812]: I0218 16:29:57.454507 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:26:33.364765757 +0000 UTC Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.123279 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.123960 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.127066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.127169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.127191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.140779 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.455309 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:40:47.724598196 +0000 UTC Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.691126 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.692294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.692336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:29:58 crc kubenswrapper[4812]: I0218 16:29:58.692347 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:29:59 crc kubenswrapper[4812]: I0218 16:29:59.456332 4812 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:56:43.845143727 +0000 UTC Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.294834 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.296581 4812 trace.go:236] Trace[1309625948]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 16:29:49.268) (total time: 11028ms): Feb 18 16:30:00 crc kubenswrapper[4812]: Trace[1309625948]: ---"Objects listed" error: 11028ms (16:30:00.296) Feb 18 16:30:00 crc kubenswrapper[4812]: Trace[1309625948]: [11.028336363s] [11.028336363s] END Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.296863 4812 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.297468 4812 trace.go:236] Trace[1316083303]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 16:29:47.158) (total time: 13138ms): Feb 18 16:30:00 crc kubenswrapper[4812]: Trace[1316083303]: ---"Objects listed" error: 13138ms (16:30:00.297) Feb 18 16:30:00 crc kubenswrapper[4812]: Trace[1316083303]: [13.138877812s] [13.138877812s] END Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.297640 4812 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.300245 4812 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.300481 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.300541 4812 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.308937 4812 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.350796 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.356728 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.427269 4812 apiserver.go:52] "Watching apiserver" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.429750 4812 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.430374 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.430954 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.431900 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.432078 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.432302 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.432351 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.432355 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.432403 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.432303 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.432600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434314 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434342 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434530 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434719 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434849 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.434888 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.435333 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.435566 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.443785 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.444342 4812 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.457332 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:27:14.469139997 +0000 UTC Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.473637 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.488248 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.499933 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501508 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501590 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501670 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501745 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501828 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501899 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.501967 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502199 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502291 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502365 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502433 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502592 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502664 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502734 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502810 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502881 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502951 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503018 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503113 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503190 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503261 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503335 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503482 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503559 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503634 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503709 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502672 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503961 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502813 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502866 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503934 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504067 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504153 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504176 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504200 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504227 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504276 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504298 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504320 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504343 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.502887 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503035 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503179 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503313 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503479 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504425 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504535 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504571 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504589 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504613 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504676 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504697 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504716 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504736 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504755 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504794 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504832 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504850 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504866 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504939 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504966 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504989 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.505051 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.505077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506981 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507076 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507127 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507166 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507199 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507260 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507284 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507307 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507332 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507391 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507426 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507452 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507484 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507511 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507535 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507559 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507585 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507633 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507670 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507724 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507747 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507771 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507801 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507866 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507892 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507926 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507949 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507972 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507993 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508016 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508041 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508066 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508090 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508167 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508254 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508308 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508332 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508358 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508409 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508433 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508458 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508480 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508546 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508590 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508613 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508639 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508663 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508688 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508736 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508837 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508859 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508934 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 16:30:00 crc 
kubenswrapper[4812]: I0218 16:30:00.508961 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509079 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509120 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510116 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510147 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 16:30:00 crc 
kubenswrapper[4812]: I0218 16:30:00.504989 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.505153 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503697 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503750 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503770 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503838 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503926 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510292 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504155 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504254 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504351 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504399 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.504396 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503718 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506224 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.503678 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.506723 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507070 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507433 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507472 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507667 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.507739 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508357 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.508916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509326 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.509614 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510991 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510995 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.511052 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.511083 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.511127 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.511566 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.511904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.512054 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.512168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.512242 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513149 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513167 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513342 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513835 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.513861 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.514175 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.514491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.514834 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.514932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.510175 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.514932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515161 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515224 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515244 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515265 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515305 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515347 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515366 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" 
(UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515401 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515418 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515439 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515466 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515517 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515492 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515658 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515678 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515801 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.516600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.516754 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.516910 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.516945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.517788 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.517893 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.515823 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518086 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518144 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518165 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518184 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 
16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518201 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518222 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518239 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518256 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518277 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518297 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518318 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518336 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518356 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518304 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: 
"9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518374 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518390 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518464 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518495 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518576 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518618 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518639 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518661 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518685 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518728 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518844 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518947 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.518977 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519068 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519167 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519211 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519319 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519334 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519349 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519344 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519361 4812 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519407 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519442 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519465 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519482 4812 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.519498 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.520903 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:01.020856503 +0000 UTC m=+21.286467412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521362 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521388 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521404 4812 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521416 4812 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521434 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521451 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521463 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521471 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521482 4812 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521494 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521505 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521515 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521526 4812 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521536 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521546 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521557 4812 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521568 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521578 4812 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521587 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521598 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521608 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521618 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521631 4812 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521641 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521651 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521661 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521673 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521686 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521697 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521706 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521720 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521733 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521753 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521766 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.521229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.530905 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.531471 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.531864 4812 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.535076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.535150 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.535220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.535609 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.535664 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.536482 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.536563 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.536635 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.536805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.537576 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.539468 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541502 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541618 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541695 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541873 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.541995 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.542032 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.542291 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.545741 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.547242 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.547780 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.548916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.550199 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.550324 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.550446 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.550742 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.551117 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.551442 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.551488 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552090 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552235 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552291 4812 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552313 4812 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552329 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552344 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552358 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552373 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552389 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552403 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552416 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552433 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552446 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552459 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552474 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552487 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552502 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552516 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552528 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552541 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552554 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552568 4812 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552580 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552593 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552608 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552624 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552648 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552662 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552676 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552691 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552709 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552726 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552741 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552754 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552768 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552782 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552795 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552809 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552822 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc 
kubenswrapper[4812]: I0218 16:30:00.552837 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552875 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552889 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552904 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.552917 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.553273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.555882 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.556300 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.557560 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:01.057520964 +0000 UTC m=+21.323131883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.557931 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:01.057587275 +0000 UTC m=+21.323198184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.558037 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:01.058027706 +0000 UTC m=+21.323638825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.558432 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.558581 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.560503 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.560599 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.560645 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.560748 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.560880 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.562432 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.562451 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.560986 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.561028 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.561391 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.561421 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.561687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.562153 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.562769 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.562867 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.562993 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563078 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563177 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.563412 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:01.063391583 +0000 UTC m=+21.329002492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563866 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.563888 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564040 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564191 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564207 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564233 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564457 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564466 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564764 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.564893 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.565223 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.565288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.565453 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.565658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.566413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.567507 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.567681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.567938 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.568157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.568465 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.569188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.577998 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.580051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.582838 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.582858 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.583338 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.583587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.583960 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.584079 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.584265 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.584480 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.584814 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585039 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585354 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41888->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585325 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41896->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585419 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41888->192.168.126.11:17697: read: connection reset by peer" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585455 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41896->192.168.126.11:17697: read: connection reset by peer" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.585621 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586589 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586725 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586956 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586956 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.586980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.587928 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.588132 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.588111 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.589139 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.591243 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.592134 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.592593 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.596711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.596945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.598334 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.598816 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.599721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.599757 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.599928 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.600053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.600184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.600497 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.600592 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.600598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.603184 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.604157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.604412 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.604650 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.604695 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.605010 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.605369 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.606792 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.606819 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.611801 4812 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.611954 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.617799 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.618534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.620201 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.621014 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.621443 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.621547 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.629663 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.630812 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.631674 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.634568 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.634724 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.635732 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.637434 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.638400 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.640429 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.640804 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.642065 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.643256 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.643880 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.644969 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.645557 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.646765 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.647367 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.648289 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.648890 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.649522 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 16:30:00 crc 
kubenswrapper[4812]: I0218 16:30:00.650621 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.650800 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.651183 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.653701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.653801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.653955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654219 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654246 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654261 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654274 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654287 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654337 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654368 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654383 4812 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654394 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654410 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654424 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654455 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654469 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654481 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654513 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654525 4812 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654536 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654549 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654561 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654571 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654626 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654668 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654683 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654697 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654712 4812 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654727 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654744 4812 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654758 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654773 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654786 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654800 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654816 4812 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654832 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654848 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654862 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654875 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654889 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654902 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654915 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654966 4812 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654982 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.654996 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655034 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655050 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655063 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655076 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655089 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655142 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655154 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655166 4812 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655178 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655214 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655233 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655296 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655379 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655400 4812 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655415 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655429 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655469 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655483 4812 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655571 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655586 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655598 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655610 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655628 4812 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655642 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655680 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655715 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655735 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655773 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655787 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655800 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655983 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.655997 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656010 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656025 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656038 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656051 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656064 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656079 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656091 4812 reconciler_common.go:293] "Volume detached for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656243 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656256 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656266 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656276 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656286 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656297 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656308 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656318 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656328 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656338 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656350 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656360 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656370 4812 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656379 4812 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656389 4812 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656399 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656410 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656420 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656430 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656441 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656451 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656461 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656471 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656482 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656492 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656502 4812 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.656512 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.660350 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.671061 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.680862 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.690158 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.698322 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.699069 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.701420 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" exitCode=255 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.701498 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576"} Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.701670 4812 scope.go:117] "RemoveContainer" containerID="bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.716417 4812 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.720415 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.738844 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.739718 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.739948 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.744624 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.756517 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.761877 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.776422 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.782783 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.790337 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.790752 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 16:30:00 crc kubenswrapper[4812]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 18 16:30:00 crc kubenswrapper[4812]: set -o allexport Feb 18 16:30:00 crc kubenswrapper[4812]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 18 16:30:00 crc kubenswrapper[4812]: source /etc/kubernetes/apiserver-url.env Feb 18 16:30:00 crc kubenswrapper[4812]: else Feb 18 16:30:00 crc kubenswrapper[4812]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 18 16:30:00 crc kubenswrapper[4812]: exit 1 Feb 18 16:30:00 crc kubenswrapper[4812]: fi Feb 18 16:30:00 crc kubenswrapper[4812]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 18 16:30:00 crc kubenswrapper[4812]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 16:30:00 crc kubenswrapper[4812]: > logger="UnhandledError" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.794192 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.800788 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: W0218 16:30:00.801800 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9ed90e54d09a0ce5679840677187ee807fa73509a498510a38bcd1b84344bb42 WatchSource:0}: Error finding container 9ed90e54d09a0ce5679840677187ee807fa73509a498510a38bcd1b84344bb42: Status 404 returned error can't find the container with id 9ed90e54d09a0ce5679840677187ee807fa73509a498510a38bcd1b84344bb42 Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.813629 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 16:30:00 crc kubenswrapper[4812]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 18 16:30:00 crc kubenswrapper[4812]: if [[ -f "/env/_master" ]]; then Feb 18 16:30:00 crc kubenswrapper[4812]: set -o allexport Feb 18 16:30:00 crc kubenswrapper[4812]: source "/env/_master" Feb 18 16:30:00 crc kubenswrapper[4812]: set +o allexport Feb 18 16:30:00 crc kubenswrapper[4812]: fi Feb 18 16:30:00 crc kubenswrapper[4812]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 18 16:30:00 crc kubenswrapper[4812]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 18 16:30:00 crc kubenswrapper[4812]: ho_enable="--enable-hybrid-overlay" Feb 18 16:30:00 crc kubenswrapper[4812]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 18 16:30:00 crc kubenswrapper[4812]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 18 16:30:00 crc kubenswrapper[4812]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 18 16:30:00 crc kubenswrapper[4812]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 16:30:00 crc kubenswrapper[4812]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 18 16:30:00 crc kubenswrapper[4812]: --webhook-host=127.0.0.1 \ Feb 18 16:30:00 crc kubenswrapper[4812]: --webhook-port=9743 \ Feb 18 16:30:00 crc kubenswrapper[4812]: ${ho_enable} \ Feb 18 16:30:00 crc kubenswrapper[4812]: --enable-interconnect \ Feb 18 16:30:00 crc kubenswrapper[4812]: --disable-approver \ Feb 18 16:30:00 crc kubenswrapper[4812]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 18 16:30:00 crc kubenswrapper[4812]: --wait-for-kubernetes-api=200s \ Feb 18 16:30:00 crc kubenswrapper[4812]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 18 16:30:00 crc kubenswrapper[4812]: --loglevel="${LOGLEVEL}" Feb 18 16:30:00 crc kubenswrapper[4812]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Feb 18 16:30:00 crc kubenswrapper[4812]: > logger="UnhandledError" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.818699 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 16:30:00 crc kubenswrapper[4812]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 18 16:30:00 crc kubenswrapper[4812]: if [[ -f "/env/_master" ]]; then Feb 18 16:30:00 crc kubenswrapper[4812]: set -o allexport Feb 18 16:30:00 crc kubenswrapper[4812]: source "/env/_master" Feb 18 16:30:00 crc kubenswrapper[4812]: set +o allexport Feb 18 16:30:00 crc kubenswrapper[4812]: fi Feb 18 16:30:00 crc kubenswrapper[4812]: Feb 18 16:30:00 crc kubenswrapper[4812]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 18 16:30:00 crc kubenswrapper[4812]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 16:30:00 crc kubenswrapper[4812]: --disable-webhook \ Feb 18 16:30:00 crc kubenswrapper[4812]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 18 16:30:00 crc kubenswrapper[4812]: --loglevel="${LOGLEVEL}" Feb 18 16:30:00 crc kubenswrapper[4812]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 16:30:00 crc kubenswrapper[4812]: > logger="UnhandledError" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.818913 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.823009 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 18 16:30:00 crc kubenswrapper[4812]: E0218 16:30:00.823077 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.823147 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.847508 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.856437 4812 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.918390 4812 csr.go:261] certificate signing request csr-5s5zd is approved, waiting to be issued Feb 18 16:30:00 crc kubenswrapper[4812]: I0218 16:30:00.937249 4812 csr.go:257] certificate signing request csr-5s5zd is issued Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.058545 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.058634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.058671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.058698 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058796 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:02.058757603 +0000 UTC m=+22.324368512 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058842 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058892 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058896 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058973 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:02.058941287 +0000 UTC m=+22.324552326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058978 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.058997 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:02.058989958 +0000 UTC m=+22.324600867 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.059009 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.059049 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 16:30:02.059036139 +0000 UTC m=+22.324647058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.160783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.161023 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.161058 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.161078 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.161178 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:02.161153854 +0000 UTC m=+22.426764763 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.457766 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:05:30.589421181 +0000 UTC Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.534637 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.549928 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.560836 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.574063 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.588541 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.599736 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.614833 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:29:44Z\\\",\\\"message\\\":\\\"W0218 16:29:43.706327 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
16:29:43.706712 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771432183 cert, and key in /tmp/serving-cert-1160477269/serving-signer.crt, /tmp/serving-cert-1160477269/serving-signer.key\\\\nI0218 16:29:44.240894 1 observer_polling.go:159] Starting file observer\\\\nW0218 16:29:44.250197 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 16:29:44.250366 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:44.250911 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1160477269/tls.crt::/tmp/serving-cert-1160477269/tls.key\\\\\\\"\\\\nF0218 16:29:44.469296 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.631974 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.643795 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.706063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e90e8c0fd50d8d6d947d726bb19187f1b4cacde1c64b14529938bd37afed5859"} Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.708000 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.715013 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:01 crc kubenswrapper[4812]: E0218 16:30:01.715172 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.715711 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8fc628eab332a86cfcec68f6c1086fce32a7fc64bc1b38c71e0ff447b57b5141"} Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.718600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9ed90e54d09a0ce5679840677187ee807fa73509a498510a38bcd1b84344bb42"} Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.719216 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.723925 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.740820 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.754000 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.770169 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad83b0635894222078732ce7208c1aa46a9b2454f7b4f9583c98fb91ebdea18\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:29:44Z\\\",\\\"message\\\":\\\"W0218 16:29:43.706327 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 
16:29:43.706712 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771432183 cert, and key in /tmp/serving-cert-1160477269/serving-signer.crt, /tmp/serving-cert-1160477269/serving-signer.key\\\\nI0218 16:29:44.240894 1 observer_polling.go:159] Starting file observer\\\\nW0218 16:29:44.250197 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 16:29:44.250366 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:44.250911 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1160477269/tls.crt::/tmp/serving-cert-1160477269/tls.key\\\\\\\"\\\\nF0218 16:29:44.469296 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.785169 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.797474 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.810009 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.820379 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.859263 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.885020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.901370 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.922225 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.938127 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 16:25:00 +0000 UTC, rotation deadline is 2026-11-24 13:48:46.661551954 +0000 UTC Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.938276 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6693h18m44.723280872s for next certificate rotation Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.943781 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.955146 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.967231 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:01 crc kubenswrapper[4812]: I0218 16:30:01.983232 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.067529 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.067638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.067687 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.067717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.067924 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.067958 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.067973 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068044 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:04.068021942 +0000 UTC m=+24.333632851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068294 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068337 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068410 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:04.068396421 +0000 UTC m=+24.334007340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068462 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:04.068451292 +0000 UTC m=+24.334062201 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.068477 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:04.068469263 +0000 UTC m=+24.334080172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.149838 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-prrcg"] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.150211 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.151240 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-962hh"] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.151837 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.152878 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.153654 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.154323 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.154523 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.154866 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.155077 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.155299 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.156665 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.168142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.168297 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.168319 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.168331 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.168376 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:04.168362884 +0000 UTC m=+24.433973793 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.168737 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.185419 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.198446 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.215586 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.231688 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.244737 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.258401 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.268962 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-netns\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-hostroot\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269043 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-conf-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-multus-certs\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-hosts-file\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269178 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-socket-dir-parent\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-daemon-config\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269265 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-os-release\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-bin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269319 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-multus\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-etc-kubernetes\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269409 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmz76\" 
(UniqueName: \"kubernetes.io/projected/cf2b75a7-be08-4a51-b100-9a75359bbd18-kube-api-access-gmz76\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269433 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgcg4\" (UniqueName: \"kubernetes.io/projected/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-kube-api-access-jgcg4\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269478 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-cnibin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269501 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-kubelet\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-system-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-cni-binary-copy\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.269622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-k8s-cni-cncf-io\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.275506 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.288830 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.305022 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.319447 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.331122 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.342173 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.349883 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.359439 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.369571 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.370845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.370898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-os-release\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.370936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-bin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.370966 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-multus\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.370997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-etc-kubernetes\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371053 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmz76\" (UniqueName: \"kubernetes.io/projected/cf2b75a7-be08-4a51-b100-9a75359bbd18-kube-api-access-gmz76\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371083 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgcg4\" (UniqueName: \"kubernetes.io/projected/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-kube-api-access-jgcg4\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-cnibin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-kubelet\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-system-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-cni-binary-copy\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371329 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-k8s-cni-cncf-io\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-hostroot\") pod \"multus-prrcg\" (UID: 
\"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-netns\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371437 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-conf-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-multus-certs\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371513 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-hosts-file\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-socket-dir-parent\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.371581 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-daemon-config\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.372612 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-daemon-config\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.372840 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.372983 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-system-cni-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373025 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-bin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-cni-multus\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-etc-kubernetes\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373351 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-os-release\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-conf-dir\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373551 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-cnibin\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-k8s-cni-cncf-io\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-var-lib-kubelet\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-hostroot\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373674 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-netns\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373679 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-hosts-file\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373713 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-host-run-multus-certs\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.373778 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf2b75a7-be08-4a51-b100-9a75359bbd18-multus-socket-dir-parent\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.374073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf2b75a7-be08-4a51-b100-9a75359bbd18-cni-binary-copy\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.386322 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.394498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmz76\" (UniqueName: \"kubernetes.io/projected/cf2b75a7-be08-4a51-b100-9a75359bbd18-kube-api-access-gmz76\") pod \"multus-prrcg\" (UID: \"cf2b75a7-be08-4a51-b100-9a75359bbd18\") " pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.395126 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgcg4\" (UniqueName: \"kubernetes.io/projected/bf2d986a-6ff1-4ee6-9dd4-939aa0866efc-kube-api-access-jgcg4\") pod \"node-resolver-962hh\" (UID: \"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\") " pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.396268 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.420388 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.457912 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:19:36.741541272 +0000 UTC Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.471206 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-prrcg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.481620 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-962hh" Feb 18 16:30:02 crc kubenswrapper[4812]: W0218 16:30:02.484881 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf2b75a7_be08_4a51_b100_9a75359bbd18.slice/crio-dcf2c73b600c5d48278888e9af892dc1b4efc942ecbc33de3102c1b4425da747 WatchSource:0}: Error finding container dcf2c73b600c5d48278888e9af892dc1b4efc942ecbc33de3102c1b4425da747: Status 404 returned error can't find the container with id dcf2c73b600c5d48278888e9af892dc1b4efc942ecbc33de3102c1b4425da747 Feb 18 16:30:02 crc kubenswrapper[4812]: W0218 16:30:02.501982 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf2d986a_6ff1_4ee6_9dd4_939aa0866efc.slice/crio-8ada28cc7a7275553b859d1df4222b17e706648e741e4bcfe1d0cb729da3af3d WatchSource:0}: Error finding container 8ada28cc7a7275553b859d1df4222b17e706648e741e4bcfe1d0cb729da3af3d: Status 404 returned error can't find the container with id 8ada28cc7a7275553b859d1df4222b17e706648e741e4bcfe1d0cb729da3af3d Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.507879 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.507984 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.508072 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.507879 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.508191 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.508245 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.512812 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.513403 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.514753 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.515411 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.516975 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.518024 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.519363 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.520354 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.521972 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 16:30:02 crc 
kubenswrapper[4812]: I0218 16:30:02.522658 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.523609 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.524398 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.525564 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.526065 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.527001 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.527546 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.528149 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.529000 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.529481 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.530914 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.531560 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.531956 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hhkxg"] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.533591 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v49jp"] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.533724 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.534882 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mfnkd"] Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.535186 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.536530 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.538301 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539258 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539327 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539388 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539457 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539498 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539351 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539580 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539662 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539700 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539736 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539832 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539854 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.539865 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.558145 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.583582 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.598497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.620521 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.635549 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.653462 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.671390 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674278 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674295 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 
16:30:02.674328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fr4\" (UniqueName: \"kubernetes.io/projected/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-kube-api-access-l6fr4\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-system-cni-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674372 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bc4da39-1fda-4604-a089-b90b684c8a46-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674602 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674676 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmmc\" (UniqueName: 
\"kubernetes.io/projected/4bc4da39-1fda-4604-a089-b90b684c8a46-kube-api-access-6lmmc\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bc4da39-1fda-4604-a089-b90b684c8a46-rootfs\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bc4da39-1fda-4604-a089-b90b684c8a46-proxy-tls\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674747 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674832 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cnibin\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-os-release\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.674932 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc 
kubenswrapper[4812]: I0218 16:30:02.674972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.675014 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.675032 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.679509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.679546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.679598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.679622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.679652 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqnv\" (UniqueName: \"kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.691455 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.708115 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.721963 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerStarted","Data":"796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.722026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerStarted","Data":"dcf2c73b600c5d48278888e9af892dc1b4efc942ecbc33de3102c1b4425da747"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.724365 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.725110 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.725172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.730487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.733475 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-962hh" event={"ID":"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc","Type":"ContainerStarted","Data":"8ada28cc7a7275553b859d1df4222b17e706648e741e4bcfe1d0cb729da3af3d"} Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.734012 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:02 crc kubenswrapper[4812]: E0218 16:30:02.734190 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.741666 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.760577 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.775262 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780628 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bc4da39-1fda-4604-a089-b90b684c8a46-rootfs\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bc4da39-1fda-4604-a089-b90b684c8a46-proxy-tls\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780669 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmmc\" (UniqueName: \"kubernetes.io/projected/4bc4da39-1fda-4604-a089-b90b684c8a46-kube-api-access-6lmmc\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780689 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780723 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cnibin\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780762 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-os-release\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bc4da39-1fda-4604-a089-b90b684c8a46-rootfs\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780824 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780778 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780947 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780978 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.780995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqnv\" (UniqueName: \"kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781134 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781531 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781609 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781721 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-os-release\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " 
pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781789 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781793 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781810 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781842 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cnibin\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-binary-copy\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.781933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fr4\" (UniqueName: \"kubernetes.io/projected/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-kube-api-access-l6fr4\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc 
kubenswrapper[4812]: I0218 16:30:02.781971 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-system-cni-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bc4da39-1fda-4604-a089-b90b684c8a46-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782138 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-system-cni-dir\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782265 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782452 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782476 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib\") pod \"ovnkube-node-v49jp\" (UID: 
\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782622 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.782766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bc4da39-1fda-4604-a089-b90b684c8a46-mcd-auth-proxy-config\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.785670 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bc4da39-1fda-4604-a089-b90b684c8a46-proxy-tls\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.785993 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.807994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmmc\" (UniqueName: \"kubernetes.io/projected/4bc4da39-1fda-4604-a089-b90b684c8a46-kube-api-access-6lmmc\") pod \"machine-config-daemon-hhkxg\" (UID: \"4bc4da39-1fda-4604-a089-b90b684c8a46\") " pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.808703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6fr4\" (UniqueName: \"kubernetes.io/projected/9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696-kube-api-access-l6fr4\") pod \"multus-additional-cni-plugins-mfnkd\" (UID: \"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\") " pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.809624 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqnv\" (UniqueName: \"kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv\") pod \"ovnkube-node-v49jp\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.813782 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.832453 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.850280 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.864686 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.880376 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.882792 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:30:02 crc kubenswrapper[4812]: W0218 16:30:02.893942 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc4da39_1fda_4604_a089_b90b684c8a46.slice/crio-18b6af777527e2592cdc8b64c405f46c15d0623f46101ebcfb70dc39cd8346f2 WatchSource:0}: Error finding container 18b6af777527e2592cdc8b64c405f46c15d0623f46101ebcfb70dc39cd8346f2: Status 404 returned error can't find the container with id 18b6af777527e2592cdc8b64c405f46c15d0623f46101ebcfb70dc39cd8346f2 Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.894639 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.906697 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.909597 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.918031 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.921393 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: W0218 16:30:02.937023 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8bd0ec_00c8_4cc8_a689_073a151689d5.slice/crio-a6a2b94809321961d3e59d1ad259af430c3708072ba350402bf698c81b7eed06 WatchSource:0}: Error finding container a6a2b94809321961d3e59d1ad259af430c3708072ba350402bf698c81b7eed06: Status 404 returned error can't find the container with id a6a2b94809321961d3e59d1ad259af430c3708072ba350402bf698c81b7eed06 Feb 18 16:30:02 crc kubenswrapper[4812]: W0218 16:30:02.938980 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d93bfc1_eb9b_4ca5_b9b4_83b5ebf01696.slice/crio-91a7790e3311ecd37141e608d5e69a8f4d4aea42bb2695531f2da8eed1c096e5 WatchSource:0}: Error finding container 91a7790e3311ecd37141e608d5e69a8f4d4aea42bb2695531f2da8eed1c096e5: Status 404 returned error can't find the container with id 91a7790e3311ecd37141e608d5e69a8f4d4aea42bb2695531f2da8eed1c096e5 Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.940820 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.958020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:02 crc kubenswrapper[4812]: I0218 16:30:02.977763 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:02Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.458251 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:51:14.795374395 +0000 UTC Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.743129 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" exitCode=0 Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.743215 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.743252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"a6a2b94809321961d3e59d1ad259af430c3708072ba350402bf698c81b7eed06"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.747254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.747318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.747353 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"18b6af777527e2592cdc8b64c405f46c15d0623f46101ebcfb70dc39cd8346f2"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.749306 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e" exitCode=0 Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.749426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" 
event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.749451 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerStarted","Data":"91a7790e3311ecd37141e608d5e69a8f4d4aea42bb2695531f2da8eed1c096e5"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.753149 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:03 crc kubenswrapper[4812]: E0218 16:30:03.753313 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.753801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-962hh" event={"ID":"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc","Type":"ContainerStarted","Data":"50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce"} Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.784356 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.805320 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.818021 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.832384 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.856381 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.874834 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.892837 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.909201 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.926901 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.942997 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.965437 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:03 crc kubenswrapper[4812]: I0218 16:30:03.983276 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.000705 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.018025 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.042646 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.063482 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.103482 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.104357 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104552 4812 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:08.104517936 +0000 UTC m=+28.370128855 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.104667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.104715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.104752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104822 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104891 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104902 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104926 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104944 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.104912 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:08.104893215 +0000 UTC m=+28.370504124 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.105018 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:08.105000608 +0000 UTC m=+28.370611517 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.105038 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:08.105029118 +0000 UTC m=+28.370640237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.121512 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.134175 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.148127 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.161945 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.174584 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.193397 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.205486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.205700 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.205763 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.205779 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.205862 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:08.205835931 +0000 UTC m=+28.471446990 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.212250 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.229323 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.243843 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.459026 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:03:25.277286355 +0000 UTC Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.507785 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.507932 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.508331 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.508391 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.508441 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:04 crc kubenswrapper[4812]: E0218 16:30:04.508486 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.758659 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692" exitCode=0 Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.758721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692"} Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.760960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684"} Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.766354 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.766424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.766440 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.778817 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.805550 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.825393 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.843490 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.860466 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.875576 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.891067 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.907827 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.933912 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.951854 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.967485 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:04 crc kubenswrapper[4812]: I0218 16:30:04.988501 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.004635 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.022775 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.043486 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.064665 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.084822 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.101636 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.116473 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.135500 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.151194 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.170047 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.183172 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.200497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.214398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.228136 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.459688 4812 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:06:03.942669954 +0000 UTC Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.557842 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qhqsd"] Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.558514 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.560443 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.561022 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.561284 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.561444 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.574884 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.597281 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.611796 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.618053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc358ff-525d-49c3-b049-35d6ffea063f-host\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.618220 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7rbw\" (UniqueName: \"kubernetes.io/projected/2cc358ff-525d-49c3-b049-35d6ffea063f-kube-api-access-j7rbw\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.618387 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc358ff-525d-49c3-b049-35d6ffea063f-serviceca\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.626762 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.641930 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.655258 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.669063 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.681741 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.694085 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.705945 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.713767 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.718865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc358ff-525d-49c3-b049-35d6ffea063f-host\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.718897 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7rbw\" (UniqueName: \"kubernetes.io/projected/2cc358ff-525d-49c3-b049-35d6ffea063f-kube-api-access-j7rbw\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.718928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc358ff-525d-49c3-b049-35d6ffea063f-serviceca\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.720118 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc358ff-525d-49c3-b049-35d6ffea063f-host\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.726987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc358ff-525d-49c3-b049-35d6ffea063f-serviceca\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.736668 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.745931 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7rbw\" (UniqueName: \"kubernetes.io/projected/2cc358ff-525d-49c3-b049-35d6ffea063f-kube-api-access-j7rbw\") pod \"node-ca-qhqsd\" (UID: \"2cc358ff-525d-49c3-b049-35d6ffea063f\") " pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.753889 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.768716 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.774931 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.774982 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.774992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.778508 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb" exitCode=0 Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.778574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb"} Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.801482 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.824038 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.842533 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.859442 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.872183 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qhqsd" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.872455 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.912277 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.941132 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:05 crc kubenswrapper[4812]: I0218 16:30:05.990133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.024960 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.062034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.102977 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.147141 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.186698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.237607 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.460746 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:09:25.277538885 +0000 UTC Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.507152 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.507243 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.507297 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.507296 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.507446 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.507588 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.701037 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.703696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.703729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.703741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.703879 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.710835 4812 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.711188 4812 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.712778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.712813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.712825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.712841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.712855 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.728397 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.733727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.733778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.733792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.733815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.733831 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.747359 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.751689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.751719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.751729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.751746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.751758 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.766247 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.770517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.771059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.771282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.771297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.771307 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.785398 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1" exitCode=0 Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.785466 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1"} Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.785710 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.787735 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qhqsd" event={"ID":"2cc358ff-525d-49c3-b049-35d6ffea063f","Type":"ContainerStarted","Data":"27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.787782 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qhqsd" event={"ID":"2cc358ff-525d-49c3-b049-35d6ffea063f","Type":"ContainerStarted","Data":"61d8a7cf5a8070d7834e62fe51ad52cce73e9ff4b254928f3a96c8c615184131"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.791566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.791603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.791616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.791633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.791648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.800352 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.804811 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"9
8e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: E0218 16:30:06.804929 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.806448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.806472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.806481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.806495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.806505 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.810740 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.821997 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.833383 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.848486 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.864740 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.883808 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.897946 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.909974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.910041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.910054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.910075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.910089 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:06Z","lastTransitionTime":"2026-02-18T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.912114 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run
/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.927338 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.948616 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.967485 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.981426 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:06 crc kubenswrapper[4812]: I0218 16:30:06.994939 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:06Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.014665 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.017201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.018498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.018515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.018541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.018556 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.034126 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.054795 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.074456 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.086395 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.100562 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.113782 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.122055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.122130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.122145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.122168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.122186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.144732 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.186656 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.224656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.224700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.224710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.224731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.224746 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.229458 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.268763 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.309859 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.328750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.328933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.329456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.329517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.329545 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.353274 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.388070 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.433148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.433202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.433215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.433236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.433251 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.461771 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:48:38.70473523 +0000 UTC Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.536864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.536910 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.536926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.536947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.536961 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.640636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.640745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.640774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.640817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.640840 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.743388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.743441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.743453 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.743470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.743480 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.792274 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec" exitCode=0 Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.792341 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.797347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.816524 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.841611 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.845512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.845581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.845601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.845632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.845656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.862967 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.877849 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.890886 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.901059 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.948729 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.951736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.951755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.951765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.951783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.951795 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:07Z","lastTransitionTime":"2026-02-18T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.964674 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.977567 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:07 crc kubenswrapper[4812]: I0218 16:30:07.993183 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:07Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.005285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.024263 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.038005 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056239 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.056382 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.151054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.151196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.151226 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.151250 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151386 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151404 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151418 4812 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151475 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:16.151456908 +0000 UTC m=+36.417067817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151648 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151692 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:16.151681613 +0000 UTC m=+36.417292522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151830 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:16.151790126 +0000 UTC m=+36.417401065 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.151897 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.152062 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:16.152033022 +0000 UTC m=+36.417643961 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.159777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.159811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.159834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.159851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.159861 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.251714 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.251954 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.251988 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.252008 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.252151 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:16.252079107 +0000 UTC m=+36.517690056 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.262458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.262531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.262556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.262586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.262610 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.366008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.366074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.366140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.366176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.366196 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.462235 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:52:19.279415494 +0000 UTC Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.470059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.470142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.470173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.470208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.470231 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.507877 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.507929 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.507898 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.508133 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.508514 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:08 crc kubenswrapper[4812]: E0218 16:30:08.508616 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.572474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.572548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.572569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.572599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.572625 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.675256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.675307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.675323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.675343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.675356 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.777680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.777751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.777772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.777800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.777820 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.813414 4812 generic.go:334] "Generic (PLEG): container finished" podID="9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696" containerID="093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31" exitCode=0 Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.813482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerDied","Data":"093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.830408 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.854183 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.872219 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5
e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\
\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.881455 4812 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.881503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.881516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.881542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.881773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.886091 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-
cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.899616 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.909621 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.925633 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.941777 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.959795 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.972654 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.983161 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.984900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.984971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.984990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.985023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.985048 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:08Z","lastTransitionTime":"2026-02-18T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:08 crc kubenswrapper[4812]: I0218 16:30:08.997611 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T16:30:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.010813 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.025561 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.088653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.088703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.088716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.088742 4812 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.088757 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.197786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.197841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.197857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.197885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.197899 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.301549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.301618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.301631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.301650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.301662 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.404616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.404683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.404699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.404720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.404768 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.462528 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:30:05.171104202 +0000 UTC Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.508378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.508427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.508444 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.508466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.508482 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.612333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.612398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.612422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.612515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.612625 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.715637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.715700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.715716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.715740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.715756 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.818682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.818729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.818742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.818762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.818773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.823662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" event={"ID":"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696","Type":"ContainerStarted","Data":"cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.834006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.834420 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.836915 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.837664 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:09 crc kubenswrapper[4812]: E0218 16:30:09.837827 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.859381 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z 
is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.867071 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.877578 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.882727 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.902463 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.924509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.924570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.924593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.924630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.924652 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:09Z","lastTransitionTime":"2026-02-18T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.927210 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.944212 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.964453 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.982783 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:09 crc kubenswrapper[4812]: I0218 16:30:09.997981 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:09Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.014825 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.029037 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.029162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.029200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.029240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.029271 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.033827 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.049124 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.067563 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.089538 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.109875 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.131276 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.132151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.132192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.132214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.132241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.132261 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.148837 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.165675 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.184814 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604f
ed0789c21936c254cb2189e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.233824 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.235290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.235318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc 
kubenswrapper[4812]: I0218 16:30:10.235331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.235350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.235364 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.252628 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.258929 4812 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.259973 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.106:41050->38.102.83.106:6443: use of closed network connection" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.291359 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.302192 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.313056 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.326647 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.338390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.338417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.338426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.338442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.338453 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.346597 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.365401 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.380399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.442333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.442429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.442453 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.442487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.442508 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.463208 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:17:15.743646331 +0000 UTC Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.507496 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.507548 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.507604 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:10 crc kubenswrapper[4812]: E0218 16:30:10.507727 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:10 crc kubenswrapper[4812]: E0218 16:30:10.508561 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:10 crc kubenswrapper[4812]: E0218 16:30:10.508836 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.535932 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.546562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.546612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.546627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.546650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.546666 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.563020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.588025 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.613348 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.629007 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.649649 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.649723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.649750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.649792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.649823 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.651397 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.676954 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.691901 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.714215 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.733346 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.754296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.754399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.754411 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.754429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.754443 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.755011 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.774899 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.794930 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604f
ed0789c21936c254cb2189e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.817574 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.838897 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.858830 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc 
kubenswrapper[4812]: I0218 16:30:10.858880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.858893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.858911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.858926 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.868924 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.896888 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.945929 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604f
ed0789c21936c254cb2189e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.962366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.962413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.962423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.962443 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.962454 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:10Z","lastTransitionTime":"2026-02-18T16:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.978006 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:10 crc kubenswrapper[4812]: I0218 16:30:10.993728 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad
71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.010395 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.025682 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.037689 4812 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.048537 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.064961 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.065004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.065016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.065034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.065048 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.069063 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.084387 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.096947 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.111624 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.124999 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.143118 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:11Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.168624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.168682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.168695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.168714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.168729 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.272003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.272054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.272065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.272081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.272090 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.375974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.376051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.376070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.376131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.376153 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.463783 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:14:57.984871378 +0000 UTC Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.480562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.480629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.480644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.480666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.480683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.583289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.583376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.583473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.583512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.583533 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.685729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.685759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.685767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.685780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.685789 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.788869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.788948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.789008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.789044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.789067 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.892704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.892793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.892814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.893208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.893505 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.997492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.997559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.997582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.997612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:11 crc kubenswrapper[4812]: I0218 16:30:11.997634 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:11Z","lastTransitionTime":"2026-02-18T16:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.101856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.101921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.101945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.101972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.101991 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.206517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.206590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.206608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.206634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.206657 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.309476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.309526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.309540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.309561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.309578 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.411590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.411636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.411649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.411670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.411683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.464011 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:10:33.126368275 +0000 UTC Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.507188 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.507285 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.507337 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:12 crc kubenswrapper[4812]: E0218 16:30:12.507463 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:12 crc kubenswrapper[4812]: E0218 16:30:12.507582 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:12 crc kubenswrapper[4812]: E0218 16:30:12.507732 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.514147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.514193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.514206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.514222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.514234 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.617242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.617669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.617800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.617936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.618056 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.721160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.721211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.721226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.721247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.721260 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.824815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.824858 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.824867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.824881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.824892 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.927713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.928161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.928171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.928190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:12 crc kubenswrapper[4812]: I0218 16:30:12.928201 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:12Z","lastTransitionTime":"2026-02-18T16:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.031583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.031621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.031632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.031646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.031656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.134577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.134630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.134646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.134666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.134679 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.238875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.238924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.238943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.238962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.238973 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.342006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.342040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.342049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.342061 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.342071 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.444508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.444584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.444603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.444631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.444649 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.464773 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:31:13.621440762 +0000 UTC Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.548876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.548960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.548982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.549013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.549032 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.652423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.652491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.652511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.652536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.652556 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.756372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.756446 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.756467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.756496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.756517 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.853234 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/0.log" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.858667 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1" exitCode=1 Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.858775 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.859132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.859206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.859224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.859577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.859602 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.860333 4812 scope.go:117] "RemoveContainer" containerID="5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.884526 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:13Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.924089 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:13Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI0218 16:30:13.021256 6145 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.021545 6145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022195 6145 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 16:30:13.022400 6145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022398 6145 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.022528 6145 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 16:30:13.023594 6145 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 16:30:13.023643 6145 factory.go:656] Stopping watch factory\\\\nI0218 16:30:13.023660 6145 ovnkube.go:599] Stopped ovnkube\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:13Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.945518 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:13Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966745 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:13Z","lastTransitionTime":"2026-02-18T16:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.966764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:13Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:13 crc kubenswrapper[4812]: I0218 16:30:13.988893 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:13Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.007690 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.028573 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.049602 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19c
f3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069701 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.069810 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.092848 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.112782 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.136682 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.151322 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.168343 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.172472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.172535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.172546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.172568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.172583 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.276463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.276555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.276579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.276611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.276634 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.380891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.380992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.381030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.381068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.381130 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.464950 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:57:43.743077748 +0000 UTC Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.485075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.485197 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.485217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.485244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.485265 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.507652 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.507771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.507830 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:14 crc kubenswrapper[4812]: E0218 16:30:14.508082 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:14 crc kubenswrapper[4812]: E0218 16:30:14.508535 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:14 crc kubenswrapper[4812]: E0218 16:30:14.508748 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.588781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.588850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.588870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.588900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.588921 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.693224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.693312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.693362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.693403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.693424 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.798281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.798377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.798405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.798441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.798469 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.866799 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/0.log" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.876354 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.878275 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.902588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.902661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.902679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.902705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.902727 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:14Z","lastTransitionTime":"2026-02-18T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.903201 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.937092 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:13Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI0218 16:30:13.021256 6145 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.021545 6145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022195 6145 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 16:30:13.022400 6145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022398 6145 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.022528 6145 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 16:30:13.023594 6145 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 16:30:13.023643 6145 factory.go:656] Stopping watch factory\\\\nI0218 16:30:13.023660 6145 ovnkube.go:599] Stopped ovnkube\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.964489 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:14 crc kubenswrapper[4812]: I0218 16:30:14.989491 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:14Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.005247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.005292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.005306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.005325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.005338 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.014905 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.037606 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.055812 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.075032 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.095553 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.110373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.110439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.110458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.110486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.110509 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.117803 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.135586 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.156950 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.182649 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.211155 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.213279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.213303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.213314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.213337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.213351 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.315741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.315775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.315784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.315799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.315809 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.419520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.419578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.419596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.419624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.419646 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.466515 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:05:26.885105195 +0000 UTC Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.518726 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd"] Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.519282 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522195 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.522941 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.523009 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.542034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.571069 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.595238 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.610311 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.625144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.625191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.625201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.625219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.625233 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.631394 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:13Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI0218 16:30:13.021256 6145 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.021545 6145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022195 6145 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 16:30:13.022400 6145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022398 6145 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.022528 6145 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 16:30:13.023594 6145 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 16:30:13.023643 6145 factory.go:656] Stopping watch factory\\\\nI0218 16:30:13.023660 6145 ovnkube.go:599] Stopped ovnkube\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.645776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.645861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.645891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zgw\" (UniqueName: \"kubernetes.io/projected/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-kube-api-access-57zgw\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.645929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.649145 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.670090 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.686922 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.700669 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.714287 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.727893 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.727930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.727942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.727961 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.727974 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.730789 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.743442 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.746909 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.746974 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zgw\" (UniqueName: \"kubernetes.io/projected/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-kube-api-access-57zgw\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.747027 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.747064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.747889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.748047 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.756553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.758853 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.765819 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zgw\" (UniqueName: \"kubernetes.io/projected/ca6eeeea-6618-4c5c-a451-b1b63009ea1b-kube-api-access-57zgw\") pod \"ovnkube-control-plane-749d76644c-rdcwd\" (UID: \"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.780515 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.797206 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.831679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.831809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.831833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.831862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.831885 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.839936 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" Feb 18 16:30:15 crc kubenswrapper[4812]: W0218 16:30:15.854858 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca6eeeea_6618_4c5c_a451_b1b63009ea1b.slice/crio-6d39afc7cf73e2ba203709c30a253e451eeeb92404be22d2b1eedc0db021efb5 WatchSource:0}: Error finding container 6d39afc7cf73e2ba203709c30a253e451eeeb92404be22d2b1eedc0db021efb5: Status 404 returned error can't find the container with id 6d39afc7cf73e2ba203709c30a253e451eeeb92404be22d2b1eedc0db021efb5 Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.883407 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/1.log" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.884325 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/0.log" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.889800 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd" exitCode=1 Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.889906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.889966 4812 scope.go:117] "RemoveContainer" containerID="5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.891161 4812 scope.go:117] "RemoveContainer" containerID="1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd" Feb 18 16:30:15 crc kubenswrapper[4812]: E0218 16:30:15.891451 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.894766 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" event={"ID":"ca6eeeea-6618-4c5c-a451-b1b63009ea1b","Type":"ContainerStarted","Data":"6d39afc7cf73e2ba203709c30a253e451eeeb92404be22d2b1eedc0db021efb5"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.910011 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934336 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934406 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934450 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:15Z","lastTransitionTime":"2026-02-18T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.934862 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.956288 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.975620 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:15 crc kubenswrapper[4812]: I0218 16:30:15.990870 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:15Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.007423 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.024951 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.038333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.038383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.038395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.038414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.038426 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.041681 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.061242 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.084058 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.102189 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.121262 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.136898 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.143147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.143186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.143198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.143215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.143226 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.152692 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.152833 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.152874 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:30:32.152847085 +0000 UTC m=+52.418458004 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.152917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.152950 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.152978 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.152989 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 16:30:32.152981088 +0000 UTC m=+52.418591997 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153215 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153233 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153280 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153296 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153334 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:32.153305315 +0000 UTC m=+52.418916224 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.153364 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:32.153343946 +0000 UTC m=+52.418954855 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.164837 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scr
ipt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:13Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI0218 16:30:13.021256 6145 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.021545 6145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022195 6145 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 16:30:13.022400 6145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022398 6145 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.022528 6145 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 16:30:13.023594 6145 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 16:30:13.023643 6145 factory.go:656] Stopping watch factory\\\\nI0218 16:30:13.023660 6145 ovnkube.go:599] 
Stopped ovnkube\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.182309 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.246250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.246295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.246304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.246318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.246330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.254289 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.254780 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.254839 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.254858 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.254939 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:30:32.254916498 +0000 UTC m=+52.520527617 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.350626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.350679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.350691 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.350711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.350725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.454226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.454290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.454305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.454327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.454343 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.467124 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:49:52.541164889 +0000 UTC Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.507617 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.507668 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.507767 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.507872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.508055 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.508279 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.557897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.557961 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.557976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.557998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.558011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.662112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.662172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.662182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.662202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.662217 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.685873 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5cqfx"] Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.686774 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.686899 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.712254 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.735936 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.753546 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.759696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhbjj\" (UniqueName: \"kubernetes.io/projected/713f6ad5-53d1-453f-a193-e8ab26e31b0e-kube-api-access-lhbjj\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.759887 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.766511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.766562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.766577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.766599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.766613 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.779283 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.802767 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.823144 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.840166 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.860780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.860857 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhbjj\" (UniqueName: \"kubernetes.io/projected/713f6ad5-53d1-453f-a193-e8ab26e31b0e-kube-api-access-lhbjj\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.861368 4812 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.861447 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:17.361425266 +0000 UTC m=+37.627036185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.866419 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c
8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5058d086fd7950231a7f82c6d1d059d0e930604fed0789c21936c254cb2189e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:13Z\\\",\\\"message\\\":\\\"/client-go/informers/factory.go:160\\\\nI0218 16:30:13.021256 6145 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.021545 6145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022195 6145 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 16:30:13.022400 6145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:13.022398 6145 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 16:30:13.022528 6145 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 16:30:13.023594 6145 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 16:30:13.023643 6145 factory.go:656] Stopping watch factory\\\\nI0218 16:30:13.023660 6145 ovnkube.go:599] Stopped ovnkube\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b
0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.869621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.869664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.869690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.869715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.869731 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.883241 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.884584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lhbjj\" (UniqueName: \"kubernetes.io/projected/713f6ad5-53d1-453f-a193-e8ab26e31b0e-kube-api-access-lhbjj\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.899473 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.902159 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/1.log" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.907860 4812 scope.go:117] "RemoveContainer" containerID="1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.908159 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.908721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" event={"ID":"ca6eeeea-6618-4c5c-a451-b1b63009ea1b","Type":"ContainerStarted","Data":"5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.908775 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" event={"ID":"ca6eeeea-6618-4c5c-a451-b1b63009ea1b","Type":"ContainerStarted","Data":"8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.919638 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.934755 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.945660 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.956524 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.958402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.958475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.958495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.958524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.958545 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.972001 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.973845 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.978625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.978693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.978715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.978820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.978852 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:16Z","lastTransitionTime":"2026-02-18T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:16 crc kubenswrapper[4812]: I0218 16:30:16.987286 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:16 crc kubenswrapper[4812]: E0218 16:30:16.994978 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T16:30:16Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.000918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.000999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.001058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.001094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.001149 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.004658 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.020368 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025201 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.025381 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.044654 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.044802 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.050661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.050708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.050725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.050748 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.050765 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.066436 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.073628 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"9
8e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.073856 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.077327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.077387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.077405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.077434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.077453 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.082341 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.097487 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.114252 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.131702 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.147053 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.172987 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.180172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.180228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.180249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc 
kubenswrapper[4812]: I0218 16:30:17.180272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.180291 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.193949 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.210572 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.227249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.249801 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.272942 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.282888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.282956 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.282979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.283011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.283035 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.291855 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:17Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.367229 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.367661 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:17 crc kubenswrapper[4812]: E0218 16:30:17.367809 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:18.367776396 +0000 UTC m=+38.633387315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.386290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.386612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.386851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.387052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.387294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.468066 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:10:07.577421908 +0000 UTC Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.491669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.491764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.491788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.491821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.491846 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.596433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.596506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.596532 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.596569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.596597 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.700530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.700602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.700623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.700654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.700677 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.804591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.804697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.804718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.804753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.804780 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.909409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.909865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.910001 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.910174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:17 crc kubenswrapper[4812]: I0218 16:30:17.910376 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:17Z","lastTransitionTime":"2026-02-18T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.014343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.014413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.014435 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.014467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.014490 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.117252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.117307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.117327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.117352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.117370 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.221773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.221862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.221895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.221929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.221955 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.326207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.326285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.326307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.326335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.326355 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.381370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.381576 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.381672 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:20.381644924 +0000 UTC m=+40.647255843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.430057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.430200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.430229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.430267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.430296 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.469201 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:56:49.600562384 +0000 UTC Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.507993 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.508089 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.508026 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.508254 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.508360 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.508480 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.508611 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:18 crc kubenswrapper[4812]: E0218 16:30:18.508891 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.533998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.534042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.534056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.534135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.534157 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.638504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.638592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.638622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.638658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.638683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.742805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.742871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.742885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.742909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.742928 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.845683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.845754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.845773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.845801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.845822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.948951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.949017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.949035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.949062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:18 crc kubenswrapper[4812]: I0218 16:30:18.949081 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:18Z","lastTransitionTime":"2026-02-18T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.051599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.051629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.051638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.051652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.051662 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.154814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.154874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.154891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.154916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.154936 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.258462 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.258645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.258676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.258715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.258734 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.362998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.363146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.363174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.363211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.363235 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.467405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.467492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.467511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.467541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.467561 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.469763 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:20:07.860988544 +0000 UTC Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.571513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.571580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.571598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.571624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.571643 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.674889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.674959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.674983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.675014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.675040 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.778669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.778729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.778746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.778772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.778792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.881579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.881669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.881696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.881727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.881751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.984811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.984888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.984911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.984952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:19 crc kubenswrapper[4812]: I0218 16:30:19.984978 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:19Z","lastTransitionTime":"2026-02-18T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.088904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.088976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.088994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.089020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.089040 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.192501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.192581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.192605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.192641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.192664 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.295734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.295796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.295815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.295842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.295861 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.399395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.399501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.399529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.399560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.399582 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.408214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.408427 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.408525 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:24.40849795 +0000 UTC m=+44.674108889 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.470661 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:12:07.273366524 +0000 UTC Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.503685 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.503753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.503772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.503806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.503827 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.507929 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.508163 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.508316 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.508361 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.508469 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.508892 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.509092 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:20 crc kubenswrapper[4812]: E0218 16:30:20.509306 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.510506 4812 scope.go:117] "RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.533925 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.572039 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c
8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.620887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.620918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.620926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.620941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.620951 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.625637 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.652500 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.675089 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.693915 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.707294 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.719647 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.724262 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.724302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.724317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.724339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.724354 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.733335 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.745483 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.758759 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.773008 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.788957 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.805913 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.822965 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.827191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.827236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.827249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.827265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.827279 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.843685 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.930950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.931196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.931218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.931245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.931265 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:20Z","lastTransitionTime":"2026-02-18T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.931702 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.934662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a"} Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.935238 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.958295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.978842 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:20 crc kubenswrapper[4812]: I0218 16:30:20.994382 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:20Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.014548 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.031809 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.034495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.034556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 
16:30:21.034577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.034607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.034629 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.052709 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d
5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.075429 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.092938 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.108759 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.130820 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.138145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.138196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.138213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.138240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.138262 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.153636 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.176666 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\
\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.191499 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.222320 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c
8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241716 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.241840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.258404 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T16:30:21Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.346794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.347223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.347296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.347424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.347496 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.450596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.450665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.450684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.450714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.450736 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.470922 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:52:50.128774666 +0000 UTC Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.554318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.554397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.554422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.554452 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.554476 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.658046 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.658146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.658166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.658194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.658214 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.762649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.762730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.762793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.762823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.762842 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.866510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.866545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.866557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.866576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.866587 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.969042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.969655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.969853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.970007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:21 crc kubenswrapper[4812]: I0218 16:30:21.970173 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:21Z","lastTransitionTime":"2026-02-18T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.073964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.074018 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.074038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.074063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.074083 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.178169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.178231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.178249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.178276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.178295 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.282854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.282933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.282951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.282980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.282998 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.386599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.386683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.386702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.386731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.386751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.471850 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:30:28.464134806 +0000 UTC Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.490490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.490552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.490573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.490600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.490620 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.507906 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.507948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.508019 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.508201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:22 crc kubenswrapper[4812]: E0218 16:30:22.508197 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:22 crc kubenswrapper[4812]: E0218 16:30:22.508435 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:22 crc kubenswrapper[4812]: E0218 16:30:22.508649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:22 crc kubenswrapper[4812]: E0218 16:30:22.508872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.594149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.594202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.594213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.594235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.594249 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.697872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.697933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.697946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.697968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.697983 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.801730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.801815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.801834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.801875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.801897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.905329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.905492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.905510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.905537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:22 crc kubenswrapper[4812]: I0218 16:30:22.905559 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:22Z","lastTransitionTime":"2026-02-18T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.008591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.008669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.008694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.008727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.008752 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.112007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.112052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.112065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.112084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.112102 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.215839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.215931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.215959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.215993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.216017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.319742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.319816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.319844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.319903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.319928 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.422195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.422249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.422262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.422280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.422293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.472720 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:33:45.318911939 +0000 UTC Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.525596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.525674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.525697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.525727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.525751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.631047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.631154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.631176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.631203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.631233 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.735713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.735812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.735836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.735863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.735881 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.839501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.839568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.839588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.839616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.839639 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.943023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.943086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.943133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.943159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:23 crc kubenswrapper[4812]: I0218 16:30:23.943176 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:23Z","lastTransitionTime":"2026-02-18T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.046803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.046859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.046877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.046906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.046929 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.151198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.151251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.151269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.151300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.151320 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.253898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.253946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.253963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.253989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.254007 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.358397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.358478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.358503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.358534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.358554 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.463208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.463283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.463306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.463338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.463360 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.468050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.468269 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.468373 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:32.468342637 +0000 UTC m=+52.733953586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.473972 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:33:44.577348995 +0000 UTC Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.507507 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.507610 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.507512 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.507690 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.507744 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.507930 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.508030 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:24 crc kubenswrapper[4812]: E0218 16:30:24.508219 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.566924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.566998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.567017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.567041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.567062 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.670736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.670797 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.670828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.670856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.670878 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.774404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.774470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.774487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.774517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.774650 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.878333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.878420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.878444 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.878476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.878502 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.981970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.982039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.982057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.982089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:24 crc kubenswrapper[4812]: I0218 16:30:24.982154 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:24Z","lastTransitionTime":"2026-02-18T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.086003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.086067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.086085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.086139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.086158 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.189511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.189600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.189626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.189658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.189683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.292957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.293030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.293047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.293075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.293104 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.396917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.396994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.397015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.397044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.397066 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.475069 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:02:53.533033955 +0000 UTC Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.502815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.502884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.502906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.502936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.502957 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.606036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.606098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.606114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.606166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.606186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.709838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.709924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.709948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.709981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.710010 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.814048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.814159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.814189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.814225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.814250 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.917600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.917653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.917673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.917709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:25 crc kubenswrapper[4812]: I0218 16:30:25.917734 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:25Z","lastTransitionTime":"2026-02-18T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.021258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.021339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.021359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.021390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.021413 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.125451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.125525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.125546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.125574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.125594 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.229759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.229839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.229859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.229888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.229907 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.333724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.333823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.333848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.333887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.333918 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.442767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.442848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.442867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.442895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.442916 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.475514 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:07:11.248069004 +0000 UTC Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.507792 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.507796 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.508010 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:26 crc kubenswrapper[4812]: E0218 16:30:26.508239 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:26 crc kubenswrapper[4812]: E0218 16:30:26.508349 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:26 crc kubenswrapper[4812]: E0218 16:30:26.508575 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.508692 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:26 crc kubenswrapper[4812]: E0218 16:30:26.508828 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.546205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.546272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.546292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.546321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.546342 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.649583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.649684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.649703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.649736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.649758 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.753193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.753294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.753340 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.753384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.753412 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.857152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.857224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.857244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.857272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.857293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.960566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.960684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.960704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.960746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:26 crc kubenswrapper[4812]: I0218 16:30:26.960771 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:26Z","lastTransitionTime":"2026-02-18T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.064389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.064455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.064491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.064526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.064550 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.168044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.168150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.168171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.168200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.168222 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.205690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.205756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.205774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.205801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.205826 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.229671 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:27Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.235947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.236050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.236131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.236164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.236187 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.260766 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:27Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.265642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.265687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.265699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.265719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.265731 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.281687 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:27Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.286507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.286584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.286602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.286627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.286643 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.308244 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:27Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.315529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.315622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.315642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.315713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.315735 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.336288 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:27Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:27 crc kubenswrapper[4812]: E0218 16:30:27.336533 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.338645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.338721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.338744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.338777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.338799 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.442834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.442968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.443002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.443036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.443060 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.476237 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:54:34.341278025 +0000 UTC Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.547688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.547780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.547800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.547829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.547850 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.671970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.672043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.672062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.672092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.672143 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.775745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.775812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.775828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.775855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.775872 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.879429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.879498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.879515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.879540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.879558 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.983501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.983714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.983745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.983777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:27 crc kubenswrapper[4812]: I0218 16:30:27.983803 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:27Z","lastTransitionTime":"2026-02-18T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.086455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.086549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.086570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.086615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.086635 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.190251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.190302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.190312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.190330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.190341 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.293186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.293277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.293301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.293331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.293356 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.398157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.398233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.398253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.398283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.398308 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.477188 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:19:58.087946989 +0000 UTC Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.501454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.501506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.501525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.501552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.501572 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.507277 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.507328 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.507414 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.507429 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:28 crc kubenswrapper[4812]: E0218 16:30:28.507624 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:28 crc kubenswrapper[4812]: E0218 16:30:28.507810 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:28 crc kubenswrapper[4812]: E0218 16:30:28.507950 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:28 crc kubenswrapper[4812]: E0218 16:30:28.508032 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.577029 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.590701 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.603509 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.605223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.605282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.605301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.605332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.605352 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.638960 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.660592 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.681315 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.704308 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.710645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.710712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.710732 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.710770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.710796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.723338 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.742644 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.758130 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.775892 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.789719 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.807600 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.813876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.813919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.813935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.813962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.813981 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.824317 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.837764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.860031 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.882439 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.903766 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:28Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.917665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.917742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.917762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.917793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:28 crc kubenswrapper[4812]: I0218 16:30:28.917815 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:28Z","lastTransitionTime":"2026-02-18T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.021490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.021564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.021589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.021621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.021648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.125459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.125516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.125534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.125559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.125578 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.228725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.228787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.228805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.228833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.228854 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.332629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.332676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.332687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.332708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.332722 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.436707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.436795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.436817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.436843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.436867 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.478295 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:29:07.976801161 +0000 UTC Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.542007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.542858 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.542898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.543719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.543819 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.647283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.647350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.647368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.647396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.647417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.750152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.750234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.750266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.750304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.750334 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.854127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.854190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.854208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.854234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.854251 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.958003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.958079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.958101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.958180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:29 crc kubenswrapper[4812]: I0218 16:30:29.958201 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:29Z","lastTransitionTime":"2026-02-18T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.062432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.062495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.062517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.062557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.062597 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.166888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.166963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.166981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.167007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.167024 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.270733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.270800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.270818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.270847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.270866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.374639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.374740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.374763 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.374795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.374815 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478491 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.478470 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:28:59.613914704 +0000 UTC Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.507876 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:30 crc kubenswrapper[4812]: E0218 16:30:30.508089 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.508188 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:30 crc kubenswrapper[4812]: E0218 16:30:30.508383 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.508416 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.508428 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:30 crc kubenswrapper[4812]: E0218 16:30:30.508520 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:30 crc kubenswrapper[4812]: E0218 16:30:30.508697 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.510013 4812 scope.go:117] "RemoveContainer" containerID="1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.529626 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is 
after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.554287 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.574987 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.582972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.583274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.583440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.583617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.583766 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.598046 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.615842 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.636014 4812 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.656688 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.679420 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.686994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.687032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.687044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.687062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.687075 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.702263 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.723960 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.750536 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.772886 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.790522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.790589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.790609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.790666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.790688 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.791199 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.819627 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.842174 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c
8e968589a27633b111d465cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.871889 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.890538 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:30Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.894131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.894192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.894206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.894227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.894242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.980241 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/1.log" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.984816 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538"} Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.986604 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.997409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.997460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.997485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.997517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:30 crc kubenswrapper[4812]: I0218 16:30:30.997544 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:30Z","lastTransitionTime":"2026-02-18T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.017448 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.048385 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.086251 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.103164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.103232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.103249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.103276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.103296 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.113482 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.137645 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 
16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.158021 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.174313 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.188484 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.203306 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.206330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.206395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.206412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.206434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.206451 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.215842 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.265815 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.285092 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.306161 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.309638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.309705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.309725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.309750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.309772 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.323289 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.341757 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.363547 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.378326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:31Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.412512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.412610 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.412637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.412677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.412707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.479623 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:53:50.839864509 +0000 UTC Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.515379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.515433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.515443 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.515462 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.515478 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.619891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.619935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.619946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.619967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.619980 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.723085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.723174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.723187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.723208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.723220 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.826035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.826148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.826171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.826203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.826221 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.930509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.930576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.930594 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.930624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.930643 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:31Z","lastTransitionTime":"2026-02-18T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.992875 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/2.log" Feb 18 16:30:31 crc kubenswrapper[4812]: I0218 16:30:31.999484 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/1.log" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.004632 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" exitCode=1 Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.004701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.004779 4812 scope.go:117] "RemoveContainer" containerID="1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.006032 4812 scope.go:117] "RemoveContainer" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.006373 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.031836 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.034774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.034829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.034842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.034864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.034879 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.058015 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.083463 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.105622 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.138469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.138535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.138558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.138585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.138603 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.139787 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c75af5093a7d70830731ef1c6e7054f548b996c8e968589a27633b111d465cd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"message\\\":\\\"andler 8 for removal\\\\nI0218 16:30:15.743446 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:30:15.743453 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:30:15.743492 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:30:15.743506 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:30:15.743513 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:30:15.743998 6279 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 16:30:15.744428 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 16:30:15.744452 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 16:30:15.744459 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 16:30:15.744470 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 16:30:15.744491 6279 factory.go:656] Stopping watch factory\\\\nI0218 16:30:15.744496 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 16:30:15.744510 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 16:30:15.744516 6279 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:30:15.744518 6279 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.165150 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\
\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2b
c4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.175863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.176022 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.176072 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.176140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176295 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:31:04.176252446 +0000 UTC m=+84.441863385 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176305 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176344 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176403 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176433 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176460 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:31:04.17642506 +0000 UTC m=+84.442035979 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176492 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176511 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:31:04.176487732 +0000 UTC m=+84.442098681 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.176675 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:31:04.176635475 +0000 UTC m=+84.442246574 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.184974 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41
ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.205855 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.228330 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.242361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.242430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.242457 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.242492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.242520 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.250423 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.266377 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.278076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.278442 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.278506 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.278533 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.278690 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:31:04.278649417 +0000 UTC m=+84.544260356 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.285076 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.302715 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.319469 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.341080 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.348560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.348664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.348692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.348725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.348761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.361794 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.376562 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:32Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.452546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.452601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.452613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.452632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.452646 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.480380 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:00:37.032424505 +0000 UTC Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.480712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.480939 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.481032 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:30:48.481003471 +0000 UTC m=+68.746614400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.507842 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.508085 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.508433 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.508657 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.508817 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.508946 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.508820 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:32 crc kubenswrapper[4812]: E0218 16:30:32.509074 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.555754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.555814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.555836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.555863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.555885 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.659904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.659970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.659988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.660017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.660038 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.763362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.763781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.763931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.764042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.764188 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.868688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.868776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.868799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.869047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.869069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.974161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.974249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.974268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.974294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:32 crc kubenswrapper[4812]: I0218 16:30:32.974313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:32Z","lastTransitionTime":"2026-02-18T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.012815 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/2.log" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.020255 4812 scope.go:117] "RemoveContainer" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" Feb 18 16:30:33 crc kubenswrapper[4812]: E0218 16:30:33.020546 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.043155 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.065537 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.082972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.083044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.083068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.083159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.083215 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.094492 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.115365 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.140664 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.166980 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.187578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.187703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.187728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.187764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.187883 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.193922 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.217613 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.252270 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b2104
7dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.280822 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.291336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.291402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.291421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.291448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.291466 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.303127 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.326892 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.356821 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.378011 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396333 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.396991 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.415701 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.431333 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:33Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.480911 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:52:42.161781587 +0000 UTC Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.500777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.500859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.500882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.500916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.500941 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.605168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.605254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.605284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.605318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.605343 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.709378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.709453 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.709524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.709561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.709587 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.814294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.814361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.814380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.814598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.814627 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.918246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.918329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.918349 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.918379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:33 crc kubenswrapper[4812]: I0218 16:30:33.918400 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:33Z","lastTransitionTime":"2026-02-18T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.021775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.021843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.021863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.021891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.021913 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.125574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.125651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.125680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.125716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.125738 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.231422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.231476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.231489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.231510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.231523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.335579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.335655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.335674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.335773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.335794 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.438567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.438641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.438668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.438701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.438723 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.481083 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:06:17.982208622 +0000 UTC Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.507181 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.507277 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.507322 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.507199 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:34 crc kubenswrapper[4812]: E0218 16:30:34.507403 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:34 crc kubenswrapper[4812]: E0218 16:30:34.507661 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:34 crc kubenswrapper[4812]: E0218 16:30:34.507742 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:34 crc kubenswrapper[4812]: E0218 16:30:34.507838 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.543644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.543741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.543762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.543794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.543815 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.647646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.647719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.647739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.647770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.647792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.751836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.751930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.751962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.751991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.752013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.856051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.856153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.856174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.856202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.856224 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.960045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.960149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.960167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.960196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:34 crc kubenswrapper[4812]: I0218 16:30:34.960218 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:34Z","lastTransitionTime":"2026-02-18T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.064031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.064158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.064184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.064215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.064239 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.167965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.168039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.168058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.168088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.168140 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.272347 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.272432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.272457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.272499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.272523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.376792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.376895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.376921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.376950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.376971 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.480178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.480252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.480271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.480297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.480317 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.481424 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:28:52.842402711 +0000 UTC Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.583486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.583572 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.583599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.583636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.583661 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.687256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.687312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.687331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.687359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.687377 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.791266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.791339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.791366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.791438 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.791469 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.895189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.895264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.895289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.895327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.895355 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.999240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.999331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.999363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.999397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:35 crc kubenswrapper[4812]: I0218 16:30:35.999420 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:35Z","lastTransitionTime":"2026-02-18T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.103266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.103360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.103389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.103430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.103459 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.207650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.207746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.207767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.207896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.207952 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.312330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.312407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.312426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.312457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.312478 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.415645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.415715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.415734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.415761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.415781 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.481664 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:47:17.727507094 +0000 UTC Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.507562 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.507620 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.507579 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:36 crc kubenswrapper[4812]: E0218 16:30:36.507784 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.507854 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:36 crc kubenswrapper[4812]: E0218 16:30:36.508046 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:36 crc kubenswrapper[4812]: E0218 16:30:36.508220 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:36 crc kubenswrapper[4812]: E0218 16:30:36.508401 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.518883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.518940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.518957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.518980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.518999 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.623457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.623539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.623571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.623617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.623637 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.726631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.726700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.726719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.726742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.726761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.830087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.830196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.830212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.830298 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.830318 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.934131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.934209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.934231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.934262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:36 crc kubenswrapper[4812]: I0218 16:30:36.934286 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:36Z","lastTransitionTime":"2026-02-18T16:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.038479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.038550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.038574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.038599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.038619 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.142481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.142523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.142533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.142546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.142556 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.245304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.245360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.245378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.245402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.245424 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.348690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.348761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.348785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.348814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.348835 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.417035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.417149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.417168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.417195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.417264 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.440247 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.448073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.448365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.448556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.448728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.448879 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.471430 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.477441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.477536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.477557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.477590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.477613 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.482758 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:46:52.738028927 +0000 UTC Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.499231 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.504445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.504497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.504517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.504543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.504561 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.525751 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.531616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.531667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.531687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.531712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.531731 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.553351 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 
2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: E0218 16:30:37.553731 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.557207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.557264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.557285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.557319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.557344 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.660698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.660748 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.660761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.660782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.660796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.764260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.764312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.764331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.764356 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.764375 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.868620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.868703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.868724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.868759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.868782 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.873722 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.898392 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.938894 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.971915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.972008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.972036 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.972072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.972134 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:37Z","lastTransitionTime":"2026-02-18T16:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:37 crc kubenswrapper[4812]: I0218 16:30:37.987898 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:37Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.008723 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.023678 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.037575 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.052845 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.067229 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.075756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.075805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.075820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.075842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.075859 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.084256 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.101005 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.117150 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.140398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.161131 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.179566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.179624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.179636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.179655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.179668 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.181997 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.209291 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b2104
7dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.234598 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.256474 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:38Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.285497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.285538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.285552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.285572 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.285588 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.388176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.388249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.388267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.388295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.388314 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.484752 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:26:49.436400451 +0000 UTC Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.491087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.491175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.491195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.491226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.491252 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.507644 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.507776 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.507890 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.507660 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:38 crc kubenswrapper[4812]: E0218 16:30:38.507904 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:38 crc kubenswrapper[4812]: E0218 16:30:38.508010 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:38 crc kubenswrapper[4812]: E0218 16:30:38.508200 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:38 crc kubenswrapper[4812]: E0218 16:30:38.508376 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.594621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.594700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.594724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.594758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.594784 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.698170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.698258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.698280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.698307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.698329 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.802584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.802667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.802692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.802724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.802749 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.907611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.907689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.907709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.907741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:38 crc kubenswrapper[4812]: I0218 16:30:38.907762 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:38Z","lastTransitionTime":"2026-02-18T16:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.011535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.011619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.011643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.011674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.011698 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.115515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.115580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.115604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.115631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.115657 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.219811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.219904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.219930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.219965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.219988 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.323889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.323967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.323985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.324015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.324037 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.427835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.427912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.427933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.427963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.427984 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.485235 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:36:52.841306233 +0000 UTC Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.533230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.533300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.533330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.533364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.533390 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.637474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.637534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.637553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.637579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.637598 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.741048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.741158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.741181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.741211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.741231 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.846879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.846950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.846969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.846996 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.847013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.950750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.950796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.950814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.950836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:39 crc kubenswrapper[4812]: I0218 16:30:39.950854 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:39Z","lastTransitionTime":"2026-02-18T16:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.054465 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.054563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.054590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.054625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.054655 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.158216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.158277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.158290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.158318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.158333 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.261392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.261448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.261467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.261496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.261517 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.364153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.364212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.364230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.364255 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.364274 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.467828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.467901 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.467920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.467946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.467967 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.486372 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:01:31.109588448 +0000 UTC Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.507348 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.507352 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.507353 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.507576 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:40 crc kubenswrapper[4812]: E0218 16:30:40.507767 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:40 crc kubenswrapper[4812]: E0218 16:30:40.507962 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:40 crc kubenswrapper[4812]: E0218 16:30:40.508276 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:40 crc kubenswrapper[4812]: E0218 16:30:40.508341 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.535734 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.553669 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.571876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.571951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.571970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.572001 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.572086 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.581130 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.602578 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.635052 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b2104
7dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.661026 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.675644 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.675957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.676178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.676200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.676230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.676251 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.688791 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.704819 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.725238 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.743818 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.762952 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.780006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.780070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.780086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.780133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.780150 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.785032 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.801698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.827681 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.852291 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.873320 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:40Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.883707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.883783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.883802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.883828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.883846 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.987417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.987481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.987502 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.987532 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:40 crc kubenswrapper[4812]: I0218 16:30:40.987553 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:40Z","lastTransitionTime":"2026-02-18T16:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.097745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.097812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.097827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.097849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.097866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.202372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.202451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.202477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.202515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.202541 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.307068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.307162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.307181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.307209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.307230 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.411697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.411785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.411826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.411872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.411906 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.487076 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:11:14.965986698 +0000 UTC Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.517395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.517449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.517462 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.517484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.517500 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.620313 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.620365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.620380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.620399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.620414 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.723766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.724300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.724456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.724609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.724746 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.828216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.828281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.828307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.828344 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.828372 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.931850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.931907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.931921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.931942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:41 crc kubenswrapper[4812]: I0218 16:30:41.931955 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:41Z","lastTransitionTime":"2026-02-18T16:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.035243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.035306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.035325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.035352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.035373 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.139510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.139581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.139600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.139629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.139653 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.244058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.244168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.244187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.244216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.244248 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.348208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.348283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.348306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.348338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.348361 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.452397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.452486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.452511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.452542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.452563 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.487916 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:38:19.677040781 +0000 UTC Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.508810 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.508954 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.508967 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:42 crc kubenswrapper[4812]: E0218 16:30:42.509189 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.509301 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:42 crc kubenswrapper[4812]: E0218 16:30:42.509462 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:42 crc kubenswrapper[4812]: E0218 16:30:42.510004 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:42 crc kubenswrapper[4812]: E0218 16:30:42.510114 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.555486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.555553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.555578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.555609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.555632 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.659625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.659700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.659722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.659752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.659773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.762038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.762128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.762148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.762173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.762192 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.865004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.865049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.865062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.865081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.865095 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.967688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.967738 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.967752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.967770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:42 crc kubenswrapper[4812]: I0218 16:30:42.967786 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:42Z","lastTransitionTime":"2026-02-18T16:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.071483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.071543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.071562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.071589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.071607 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.174882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.174930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.174945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.174968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.174988 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.278041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.278187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.278216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.278437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.278462 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.381645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.381680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.381689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.381705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.381715 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.484866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.484920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.484937 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.484962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.484981 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.488965 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:33:12.718151664 +0000 UTC Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.588652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.589299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.589624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.589852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.590001 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.693894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.693972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.693991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.694019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.694049 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.796764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.797016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.797262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.797449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.797591 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.900170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.900597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.900854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.901001 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:43 crc kubenswrapper[4812]: I0218 16:30:43.901158 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:43Z","lastTransitionTime":"2026-02-18T16:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.004424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.004490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.004511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.004538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.004559 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.107258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.107321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.107339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.107365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.107385 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.211172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.211251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.211269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.211299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.211325 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.314566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.314629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.314643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.314719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.314733 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.418121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.418181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.418196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.418222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.418243 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.489976 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:26:59.364904618 +0000 UTC Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.507735 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.507778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.507803 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.507955 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:44 crc kubenswrapper[4812]: E0218 16:30:44.508150 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:44 crc kubenswrapper[4812]: E0218 16:30:44.508244 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:44 crc kubenswrapper[4812]: E0218 16:30:44.508478 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:44 crc kubenswrapper[4812]: E0218 16:30:44.508559 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.520764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.520807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.520818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.520836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.520849 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.624349 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.624909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.625135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.625402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.625572 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.728468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.728520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.728532 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.728552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.728570 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.831390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.831441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.831458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.831484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.831503 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.933907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.933950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.933959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.933975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:44 crc kubenswrapper[4812]: I0218 16:30:44.933986 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:44Z","lastTransitionTime":"2026-02-18T16:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.037235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.037291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.037307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.037326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.037339 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.139889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.139946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.139960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.139980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.139994 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.244729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.244805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.244824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.244853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.244872 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.348128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.348164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.348177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.348194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.348208 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.450556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.450632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.450658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.450693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.450719 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.490926 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:40:03.72770902 +0000 UTC Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.553074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.553140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.553153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.553174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.553188 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.654845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.654931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.654946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.654968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.654985 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.757942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.758031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.758045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.758066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.758078 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.860637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.860689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.860708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.860733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.860750 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.964506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.964569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.964588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.964616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:45 crc kubenswrapper[4812]: I0218 16:30:45.964636 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:45Z","lastTransitionTime":"2026-02-18T16:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.068419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.068466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.068478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.068499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.068510 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.171868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.171954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.171982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.172021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.172050 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.275864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.275931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.275954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.275985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.276008 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.379359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.379431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.379448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.379476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.379493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.482034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.482090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.482128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.482146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.482158 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.491452 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:31:19.180399083 +0000 UTC Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.508058 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.508148 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.508165 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:46 crc kubenswrapper[4812]: E0218 16:30:46.508297 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.508413 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:46 crc kubenswrapper[4812]: E0218 16:30:46.508588 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:46 crc kubenswrapper[4812]: E0218 16:30:46.508655 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:46 crc kubenswrapper[4812]: E0218 16:30:46.508769 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.585350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.585433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.585454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.585480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.585499 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.689797 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.689877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.689900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.689928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.689946 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.793209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.793272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.793294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.793321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.793340 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.895972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.896018 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.896031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.896054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:46 crc kubenswrapper[4812]: I0218 16:30:46.896069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:46Z","lastTransitionTime":"2026-02-18T16:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.000963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.001039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.001051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.001075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.001115 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.104913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.104985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.105008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.105037 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.105055 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.208042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.208150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.208174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.208203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.208224 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.310853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.310906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.310918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.310938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.310949 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.414290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.414341 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.414351 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.414368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.414379 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.491813 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:35:11.942919329 +0000 UTC Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.508267 4812 scope.go:117] "RemoveContainer" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.508469 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.517655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.517705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.517716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.517734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.517750 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.621400 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.621449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.621460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.621480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.621493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.704013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.704059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.704074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.704110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.704127 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.719122 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:47Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.723196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.723256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.723281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.723317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.723353 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.741727 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:47Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.746971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.747034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.747054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.747086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.747137 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.764192 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:47Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.768573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.768621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.768637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.768655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.768666 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.785208 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:47Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.790606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.790653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.790664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.790682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.790693 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.808880 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:47Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:47 crc kubenswrapper[4812]: E0218 16:30:47.809005 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.810962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.811016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.811032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.811054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.811069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.914733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.914841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.914860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.914879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:47 crc kubenswrapper[4812]: I0218 16:30:47.914892 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:47Z","lastTransitionTime":"2026-02-18T16:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.018285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.018334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.018350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.018370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.018384 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.120971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.121023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.121042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.121070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.121087 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.232177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.232795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.232821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.232849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.232866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.336023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.336080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.336093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.336158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.336170 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.439021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.439071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.439084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.439131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.439143 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.492004 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:26:18.530947091 +0000 UTC Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.508436 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.508863 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.509399 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.509572 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.509405 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.509701 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.509818 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.509975 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.542522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.542585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.542598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.542620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.542635 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.577772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.578020 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:48 crc kubenswrapper[4812]: E0218 16:30:48.578199 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:31:20.57814344 +0000 UTC m=+100.843754529 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.645526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.645576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.645589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.645607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.645617 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.748958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.749010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.749024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.749043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.749058 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.853730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.853802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.853833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.853870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.853901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.957232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.957290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.957302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.957323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:48 crc kubenswrapper[4812]: I0218 16:30:48.957334 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:48Z","lastTransitionTime":"2026-02-18T16:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.060321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.060373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.060384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.060404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.060417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.163539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.163594 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.163608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.163631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.163645 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.266648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.266702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.266712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.266734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.266749 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.370652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.370701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.370711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.370728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.370739 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.474188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.474230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.474240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.474261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.474275 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.492574 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:35:45.113800993 +0000 UTC Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.577950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.578015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.578032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.578055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.578071 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.681071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.681142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.681155 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.681173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.681187 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.786310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.786366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.786384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.786410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.786429 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.889983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.890037 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.890053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.890079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.890093 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.994021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.994074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.994123 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.994151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:49 crc kubenswrapper[4812]: I0218 16:30:49.994170 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:49Z","lastTransitionTime":"2026-02-18T16:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.086893 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/0.log" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.086969 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf2b75a7-be08-4a51-b100-9a75359bbd18" containerID="796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8" exitCode=1 Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.087021 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerDied","Data":"796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.087718 4812 scope.go:117] "RemoveContainer" containerID="796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.097003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.097065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.097079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.097126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.097144 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.106075 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.123502 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.138819 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.150840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.168209 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.185800 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.200896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.200947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.200961 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.200982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.200993 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.202620 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.217003 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.248605 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b2104
7dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.268168 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.282884 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.300642 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.322636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.322695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.322709 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.322735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.322751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.324257 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.341268 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.355012 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.373399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.391295 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.425431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.425488 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 
16:30:50.425503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.425525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.425538 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.492997 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:21:45.107895512 +0000 UTC Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.507668 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.507795 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:50 crc kubenswrapper[4812]: E0218 16:30:50.507820 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.507926 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.507931 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:50 crc kubenswrapper[4812]: E0218 16:30:50.508266 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:50 crc kubenswrapper[4812]: E0218 16:30:50.508402 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:50 crc kubenswrapper[4812]: E0218 16:30:50.508672 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.529480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.529699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.529823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.529949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.530080 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.530525 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.549022 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.564497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.578540 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.598624 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.616045 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.632912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.632992 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.633327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.633508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.633543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.633664 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.652335 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.676951 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.697190 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.712760 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.728152 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.737137 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.737173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.737187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.737207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.737222 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.749854 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.766697 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.781918 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.797498 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.815410 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:50Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.840140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.840219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.840233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.840284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.840301 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.942688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.942756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.942774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.942800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:50 crc kubenswrapper[4812]: I0218 16:30:50.942821 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:50Z","lastTransitionTime":"2026-02-18T16:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.046870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.047259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.047463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.047670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.047799 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.095426 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/0.log" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.095498 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerStarted","Data":"ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.112684 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.136254 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.149612 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.151238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.151416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.151534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.151650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.151765 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.166414 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001e
dfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.189599 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.212034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.234887 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255379 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.255922 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.278718 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.295881 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.307078 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 
16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.321049 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.334352 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.346399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359165 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359303 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.359777 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.372871 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.392274 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:51Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.462430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.462481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.462495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.462513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.462524 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.493879 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:20:40.638317741 +0000 UTC Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.565294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.565367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.565382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.565404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.565418 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.669034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.669578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.669720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.669802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.669868 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.772983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.773315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.773424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.773499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.773582 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.877323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.877383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.877397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.877421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.877444 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.980461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.980860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.981034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.981264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:51 crc kubenswrapper[4812]: I0218 16:30:51.981398 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:51Z","lastTransitionTime":"2026-02-18T16:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.084243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.084284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.084296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.084317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.084329 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.187720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.187760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.187769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.187786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.187798 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.291558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.291608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.291618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.291638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.291649 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.395088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.395265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.395297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.395335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.395363 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.494736 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:45:09.735346498 +0000 UTC Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.498138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.498185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.498196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.498214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.498228 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.507340 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.507486 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:52 crc kubenswrapper[4812]: E0218 16:30:52.507629 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.507665 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.507664 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:52 crc kubenswrapper[4812]: E0218 16:30:52.507837 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:52 crc kubenswrapper[4812]: E0218 16:30:52.507921 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:52 crc kubenswrapper[4812]: E0218 16:30:52.508055 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.525807 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.601195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.601255 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.601274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.601319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.601340 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.705359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.705445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.705472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.705510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.705534 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.808003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.808057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.808070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.808089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.808123 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.910575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.910626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.910639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.910657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:52 crc kubenswrapper[4812]: I0218 16:30:52.910670 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:52Z","lastTransitionTime":"2026-02-18T16:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.013593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.013660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.013684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.014185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.014227 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.117416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.117466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.117479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.117498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.117531 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.220294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.220361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.220375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.220395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.220412 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.324078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.324146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.324160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.324177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.324189 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.427156 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.427245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.427279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.427315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.427343 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.494922 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:55:31.664967585 +0000 UTC Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.532989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.533039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.533057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.533081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.533119 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.636491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.636537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.636548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.636569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.636581 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.740463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.740536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.740555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.740584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.740608 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.844537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.844605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.844627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.844658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.844678 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.948512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.948639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.948669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.948704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:53 crc kubenswrapper[4812]: I0218 16:30:53.948732 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:53Z","lastTransitionTime":"2026-02-18T16:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.052117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.052184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.052200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.052222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.052240 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.154895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.154946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.154959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.154978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.154992 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.258335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.258393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.258410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.258432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.258448 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.361672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.361744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.361762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.361787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.361804 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.465080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.465176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.465195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.465223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.465241 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.495847 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:23:23.74986088 +0000 UTC Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.507396 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.507432 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.507560 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:54 crc kubenswrapper[4812]: E0218 16:30:54.507624 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:54 crc kubenswrapper[4812]: E0218 16:30:54.507731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:54 crc kubenswrapper[4812]: E0218 16:30:54.507863 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.507981 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:54 crc kubenswrapper[4812]: E0218 16:30:54.508119 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.568023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.568089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.568130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.568154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.568171 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.671483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.671554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.671571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.671601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.671619 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.775133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.775189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.775202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.775220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.775231 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.878316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.878415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.878446 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.878482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.878511 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.981050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.981624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.981813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.981997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:54 crc kubenswrapper[4812]: I0218 16:30:54.982232 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:54Z","lastTransitionTime":"2026-02-18T16:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.086190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.086253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.086272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.086300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.086319 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.190211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.190287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.190297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.190319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.190330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.293661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.293845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.293870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.293898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.293918 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.397952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.398003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.398020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.398043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.398060 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.496999 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:28:09.858924339 +0000 UTC Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.501882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.501940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.501963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.501992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.502013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.604770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.604874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.604898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.604925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.604947 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.708019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.708457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.708699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.709053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.709302 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.814044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.814148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.814171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.814199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.814220 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.918060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.918170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.918189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.918218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:55 crc kubenswrapper[4812]: I0218 16:30:55.918240 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:55Z","lastTransitionTime":"2026-02-18T16:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.021816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.021869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.021884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.021906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.021922 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.125701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.125760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.125778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.125803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.125822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.229686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.229756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.229798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.229835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.229859 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.332631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.332740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.332765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.332797 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.332823 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.436574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.436660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.436680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.436715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.436739 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.497501 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:55:58.723119331 +0000 UTC Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.507948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.508053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.508214 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:56 crc kubenswrapper[4812]: E0218 16:30:56.508264 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.508314 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:56 crc kubenswrapper[4812]: E0218 16:30:56.508518 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:56 crc kubenswrapper[4812]: E0218 16:30:56.508734 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:56 crc kubenswrapper[4812]: E0218 16:30:56.508838 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.540195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.540260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.540279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.540309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.540330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.643332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.643408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.643428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.643456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.643481 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.746585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.746632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.746642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.746660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.746672 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.850175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.850243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.850264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.850292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.850313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.953755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.954177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.954333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.954481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:56 crc kubenswrapper[4812]: I0218 16:30:56.954618 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:56Z","lastTransitionTime":"2026-02-18T16:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.062343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.062397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.062407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.062424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.062436 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.165369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.165899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.166164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.166386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.166582 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.270056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.270637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.270796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.270896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.270991 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.375274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.375337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.375355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.375387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.375407 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.479099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.479175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.479190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.479210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.479228 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.497621 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:52:08.025604427 +0000 UTC Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.582419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.582841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.582939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.583055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.583194 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.687373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.687448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.687469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.687499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.687520 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.792029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.792123 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.792149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.792180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.792203 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.895677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.895735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.895753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.895775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.895792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.999837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:57 crc kubenswrapper[4812]: I0218 16:30:57.999896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:57.999917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:57.999949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:57.999969 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:57Z","lastTransitionTime":"2026-02-18T16:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.105602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.105656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.105669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.105691 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.105707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.153658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.153752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.153776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.153809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.153837 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.172579 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:58Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.177911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.178034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.178067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.178133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.178163 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.198738 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:58Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.204485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.204563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.204576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.204596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.204637 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.224290 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:58Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.229927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.230006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.230030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.230067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.230093 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.250064 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:58Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.256625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.256680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.256699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.256727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.256747 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.275446 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:30:58Z is after 2025-08-24T17:21:41Z" Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.275676 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.278498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.278564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.278588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.278620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.278641 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.382378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.382415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.382424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.382442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.382454 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.485336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.485408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.485430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.485464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.485489 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.497781 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:45:42.42247665 +0000 UTC Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.507420 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.507471 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.507432 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.507618 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.507647 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.507947 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.508270 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:30:58 crc kubenswrapper[4812]: E0218 16:30:58.508327 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.588749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.588808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.588827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.588853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.588880 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.693022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.693126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.693148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.693179 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.693200 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.797237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.797750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.797941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.798238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.798431 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.901550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.901626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.901645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.901673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:58 crc kubenswrapper[4812]: I0218 16:30:58.901696 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:58Z","lastTransitionTime":"2026-02-18T16:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.006366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.006440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.006459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.006490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.006510 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.110288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.110367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.110395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.110430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.110461 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.215251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.215356 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.215383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.215420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.215464 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.319363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.319472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.319497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.319534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.319592 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.423662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.423746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.423771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.423804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.423839 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.497967 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:01:43.842116319 +0000 UTC Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.528063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.528243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.528391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.528434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.528493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.637035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.637203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.637231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.637269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.637311 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.741428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.741502 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.741526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.741563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.741588 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.845543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.845596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.845609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.845629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.845644 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.950676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.950736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.950751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.950776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:30:59 crc kubenswrapper[4812]: I0218 16:30:59.950797 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:30:59Z","lastTransitionTime":"2026-02-18T16:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.053310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.053358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.053368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.053384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.053397 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.157800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.157889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.157911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.157944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.157967 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.260831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.260892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.260909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.260934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.260951 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.363533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.363575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.363588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.363607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.363621 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.466951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.467011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.467024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.467050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.467064 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.498462 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:46:27.252484619 +0000 UTC Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.508345 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.508452 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:00 crc kubenswrapper[4812]: E0218 16:31:00.508573 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.508787 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:00 crc kubenswrapper[4812]: E0218 16:31:00.508874 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:00 crc kubenswrapper[4812]: E0218 16:31:00.509058 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.509287 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:00 crc kubenswrapper[4812]: E0218 16:31:00.509473 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.528991 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.555901 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.570944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.571077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.571168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.571211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.571240 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.577769 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.596441 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.620371 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.638568 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.656258 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.672057 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 
16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.676097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.676157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.676171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.676193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.676209 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.688676 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19733a95-c245-4229-b484-68859e9debe0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.710605 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.744481 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b2104
7dd15013189f55c17c148538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.775582 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.778856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.778919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.778944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.778980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.779007 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.795751 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.815417 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.832137 4812 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5
fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.850434 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.866873 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc 
kubenswrapper[4812]: I0218 16:31:00.882573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.882622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.882635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.882655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.882670 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.884531 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:00Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.987032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.987091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.987146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.987173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:00 crc kubenswrapper[4812]: I0218 16:31:00.987196 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:00Z","lastTransitionTime":"2026-02-18T16:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.090317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.090365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.090380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.090403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.090422 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.192837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.192897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.192909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.192930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.192943 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.295781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.295840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.295851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.295870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.295886 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.399352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.399434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.399451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.399477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.399494 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.498832 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:48:45.641128769 +0000 UTC Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.502167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.502200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.502214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.502232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.502242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.605412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.605483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.605512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.605546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.605569 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.709136 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.709216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.709235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.709267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.709295 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.813650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.813758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.813809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.813832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.813850 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.916880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.916932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.916943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.916966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:01 crc kubenswrapper[4812]: I0218 16:31:01.916980 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:01Z","lastTransitionTime":"2026-02-18T16:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.020629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.020680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.020690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.020708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.020722 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.124699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.124799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.125006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.125043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.125065 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.228594 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.228679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.228701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.228731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.228756 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.334425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.334570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.334646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.334833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.334916 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.439136 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.439215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.439235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.439273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.439293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.499212 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:35:09.569110987 +0000 UTC Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.508003 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.508068 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.508150 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:02 crc kubenswrapper[4812]: E0218 16:31:02.508346 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.508493 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:02 crc kubenswrapper[4812]: E0218 16:31:02.508807 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:02 crc kubenswrapper[4812]: E0218 16:31:02.509496 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:02 crc kubenswrapper[4812]: E0218 16:31:02.509654 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.510169 4812 scope.go:117] "RemoveContainer" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.551013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.551094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.551152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.551188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.551213 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.654772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.655241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.655254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.655276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.655293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.759511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.759573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.759595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.759622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.759640 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.864556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.864619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.864639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.864667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.864687 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.978303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.978345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.978357 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.978377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:02 crc kubenswrapper[4812]: I0218 16:31:02.978392 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:02Z","lastTransitionTime":"2026-02-18T16:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.081332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.081372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.081399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.081420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.081436 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.144362 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/2.log" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.148034 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.149387 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.166326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19733a95-c245-4229-b484-68859e9debe0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.184656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.184697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.184713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.184735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.184755 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.186837 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.229069 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a0
07cc6699e9695019755fa650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:31:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.255074 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.271250 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 
16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.288078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.288163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.288180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.288205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.288226 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.290340 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.311775 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.330845 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.347963 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.365400 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.384489 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.391120 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.391181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.391192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.391210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.391225 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.401963 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.417058 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.437506 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.452954 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.472020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.487764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.494091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.494166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.494179 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.494198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.494212 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.499779 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:07:55.071180967 +0000 UTC Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.511067 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:03Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.596860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.596899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.596912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.596930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.596947 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.700653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.700724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.700743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.700768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.700788 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.803433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.803483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.803496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.803516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.803532 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.907053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.907138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.907152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.907181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:03 crc kubenswrapper[4812]: I0218 16:31:03.907198 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:03Z","lastTransitionTime":"2026-02-18T16:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.010930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.011003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.011021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.011049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.011083 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.114921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.114999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.115044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.115084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.115146 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.155554 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/3.log" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.157369 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/2.log" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.162409 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" exitCode=1 Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.162460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.162515 4812 scope.go:117] "RemoveContainer" containerID="3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.163958 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.164346 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.191864 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.196890 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197165 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:08.197127006 +0000 UTC m=+148.462737925 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.197359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.197431 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.197461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197631 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197669 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197689 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197705 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197739 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.197765 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:08.197721889 +0000 UTC m=+148.463332978 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.198190 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:08.198140048 +0000 UTC m=+148.463750997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.198228 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:08.19821242 +0000 UTC m=+148.463823359 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.212212 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.218819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.218872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.218893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.218919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.218941 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.230774 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.250160 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.269389 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 
2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.289032 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.299018 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.299616 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.299688 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.299720 4812 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.299854 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:08.299817292 +0000 UTC m=+148.565428231 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322298 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.322687 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.345067 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.378800 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a0
07cc6699e9695019755fa650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e19bd0e24187807b65ac1907d7e30955f2b21047dd15013189f55c17c148538\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:31Z\\\",\\\"message\\\":\\\"onfig-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 16:30:31.606270 6498 services_controller.go:360] Finished syncing service image-registry on namespace openshift-image-registry for network=default : 1.036965ms\\\\nI0218 16:30:31.606431 6498 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606447 6498 services_controller.go:453] Built service openshift-machine-config-operator/machine-config-operator template LB for network=default: []services.LB{}\\\\nI0218 16:30:31.606416 6498 services_controller.go:451] Built service openshift-kube-scheduler/scheduler cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TC\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:31:03Z\\\",\\\"message\\\":\\\"208] Removed *v1.Node event handler 7\\\\nI0218 16:31:03.525495 6934 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 16:31:03.525549 6934 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 16:31:03.525502 6934 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:31:03.525518 6934 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:31:03.525606 6934 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 16:31:03.525641 6934 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:31:03.525664 6934 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 16:31:03.525669 6934 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:31:03.525740 6934 factory.go:656] Stopping watch factory\\\\nI0218 16:31:03.525745 6934 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:31:03.525806 6934 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:31:03.525750 6934 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 
16:31:03.525924 6934 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 16:31:03.525973 6934 ovnkube.go:599] Stopped ovnkube\\\\nI0218 16:31:03.526010 6934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 16:31:03.526132 6934 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:31:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.406532 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.425892 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 
16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.426190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.426254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.426271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.426297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.426314 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.443060 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19733a95-c245-4229-b484-68859e9debe0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.458872 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/web
hook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.474825 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.491573 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.500334 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 04:47:56.181725663 +0000 UTC Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.507748 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.507939 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.508058 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.508184 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.508290 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.508394 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.508483 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:04 crc kubenswrapper[4812]: E0218 16:31:04.508584 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.512382 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.529583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.529645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.529663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.529689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.529707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.530440 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.551514 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:04Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.633484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.633546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.633569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.633593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.633610 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.737199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.737280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.737302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.737331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.737356 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.840972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.841042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.841064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.841092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.841155 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.945309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.945409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.945430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.945458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:04 crc kubenswrapper[4812]: I0218 16:31:04.945479 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:04Z","lastTransitionTime":"2026-02-18T16:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.049040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.049161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.049183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.049214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.049239 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.153185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.153267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.153280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.153301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.153316 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.170324 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/3.log" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.175964 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:05 crc kubenswrapper[4812]: E0218 16:31:05.176258 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.199837 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.218216 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.241416 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.257278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.257337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.257355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.257380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.257398 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.261730 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.281079 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.298370 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.319346 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.339526 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.358042 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.360666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.360703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.360716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.360735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.360749 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.376246 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.402295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.464167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.464226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.464240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.464261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.464613 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.469778 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.484868 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.497219 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19733a95-c245-4229-b484-68859e9debe0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.501269 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:34:47.778963233 +0000 UTC Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.513460 4812 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.548669 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f
7aec33a007cc6699e9695019755fa650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:31:03Z\\\",\\\"message\\\":\\\"208] Removed *v1.Node event handler 7\\\\nI0218 16:31:03.525495 6934 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 16:31:03.525549 6934 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 16:31:03.525502 6934 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:31:03.525518 6934 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:31:03.525606 6934 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 16:31:03.525641 6934 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:31:03.525664 6934 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 16:31:03.525669 6934 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:31:03.525740 6934 factory.go:656] Stopping watch factory\\\\nI0218 16:31:03.525745 6934 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:31:03.525806 6934 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:31:03.525750 6934 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 16:31:03.525924 6934 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 16:31:03.525973 6934 ovnkube.go:599] Stopped ovnkube\\\\nI0218 16:31:03.526010 6934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 16:31:03.526132 6934 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.569840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.570405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.570418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.570437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.570448 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.581603 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.600471 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T16:31:05Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.675003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.675079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.675131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.675164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.675186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.779569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.779647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.779667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.779696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.779718 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.883300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.883408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.883427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.883457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.883482 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.985941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.986000 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.986021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.986050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:05 crc kubenswrapper[4812]: I0218 16:31:05.986069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:05Z","lastTransitionTime":"2026-02-18T16:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.090463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.090549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.090576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.090609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.090633 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.194581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.194644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.194655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.194677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.194690 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.298754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.298815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.298841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.298875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.298894 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.401802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.401863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.401883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.401911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.401934 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.503028 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:41:16.459746325 +0000 UTC Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.505846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.505887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.505899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.505916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.505926 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.507664 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:06 crc kubenswrapper[4812]: E0218 16:31:06.507872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.508184 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:06 crc kubenswrapper[4812]: E0218 16:31:06.508295 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.508529 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:06 crc kubenswrapper[4812]: E0218 16:31:06.508635 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.508960 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:06 crc kubenswrapper[4812]: E0218 16:31:06.509076 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.609714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.609770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.609783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.609808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.609824 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.713282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.713357 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.713373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.713398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.713412 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.817638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.817717 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.817737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.817768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.817791 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.922203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.922267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.922284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.922310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:06 crc kubenswrapper[4812]: I0218 16:31:06.922327 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:06Z","lastTransitionTime":"2026-02-18T16:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.026936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.027024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.027043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.027074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.027131 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.130499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.130582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.130601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.130626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.130649 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.234418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.234493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.234513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.234548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.234570 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.338046 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.338172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.338203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.338236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.338257 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.443378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.443460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.443480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.443510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.443543 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.503539 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:22:10.327210257 +0000 UTC Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.546192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.546250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.546263 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.546285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.546302 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.649247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.649297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.649315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.649345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.649371 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.753327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.753395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.753421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.753450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.753471 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.857743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.857815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.857840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.857874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.857900 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.961492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.961561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.961585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.961615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:07 crc kubenswrapper[4812]: I0218 16:31:07.961636 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:07Z","lastTransitionTime":"2026-02-18T16:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.065489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.065561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.065583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.065611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.065631 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.168425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.168518 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.168537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.168568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.168590 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.273524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.273590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.273607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.273632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.273652 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.345609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.346193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.346212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.346244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.346262 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.367351 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.379465 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.379574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.379596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.379632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.379655 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.401849 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.408063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.408351 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.408501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.408660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.408840 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.429666 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.434882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.434935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.434954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.434980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.434998 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.454901 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.460231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.460287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.460306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.460334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.460356 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.480890 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:08Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.481147 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.483815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.483902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.483926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.483960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.483984 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.504644 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:13:05.165335462 +0000 UTC Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.508185 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.508274 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.508298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.508210 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.508475 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.508656 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.508810 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:08 crc kubenswrapper[4812]: E0218 16:31:08.509163 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.587404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.587465 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.587482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.587510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.587526 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.690456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.690506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.690520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.690542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.690553 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.794164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.794233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.794254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.794278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.794294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.897806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.897947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.897978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.898013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:08 crc kubenswrapper[4812]: I0218 16:31:08.898037 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:08Z","lastTransitionTime":"2026-02-18T16:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.002635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.003023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.003165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.003273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.003377 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.107002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.107054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.107064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.107082 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.107094 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.211191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.211246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.211260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.211283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.211301 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.315035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.315137 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.315158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.315185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.315205 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.417975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.418022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.418034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.418058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.418070 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.505797 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:01:41.610914269 +0000 UTC Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.521295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.521370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.521396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.521429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.521453 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.625399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.625461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.625479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.625504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.625523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.729186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.729232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.729244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.729261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.729273 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.832268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.832320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.832330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.832349 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.832362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.935999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.936149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.936178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.936219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:09 crc kubenswrapper[4812]: I0218 16:31:09.936244 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:09Z","lastTransitionTime":"2026-02-18T16:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.039249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.039329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.039346 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.039372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.039390 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.142457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.142515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.142530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.142552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.142567 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.245207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.245294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.245322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.245353 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.245376 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.349224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.349271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.349281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.349299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.349311 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.453794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.453860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.453879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.453907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.453929 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.506487 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:51:30.66545839 +0000 UTC Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.507903 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.507986 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.507945 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.507912 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:10 crc kubenswrapper[4812]: E0218 16:31:10.508149 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:10 crc kubenswrapper[4812]: E0218 16:31:10.508366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:10 crc kubenswrapper[4812]: E0218 16:31:10.509392 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:10 crc kubenswrapper[4812]: E0218 16:31:10.509714 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.547220 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c8bd0ec-00c8-4cc8-a689-073a151689d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:31:03Z\\\",\\\"message\\\":\\\"208] Removed *v1.Node event handler 7\\\\nI0218 16:31:03.525495 6934 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 16:31:03.525549 6934 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 16:31:03.525502 6934 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 16:31:03.525518 6934 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 16:31:03.525606 6934 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 16:31:03.525641 6934 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 16:31:03.525664 6934 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 16:31:03.525669 6934 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 16:31:03.525740 6934 factory.go:656] Stopping watch factory\\\\nI0218 16:31:03.525745 6934 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 16:31:03.525806 6934 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 16:31:03.525750 6934 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 16:31:03.525924 6934 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 16:31:03.525973 6934 ovnkube.go:599] Stopped ovnkube\\\\nI0218 16:31:03.526010 6934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 16:31:03.526132 6934 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:31:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xrqnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v49jp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.557944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.557978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.557992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.558011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.558025 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.566591 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d93bfc1-eb9b-4ca5-b9b4-83b5ebf01696\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7af8b9500bee01819a03c60d5d3d08cd4a57c9914ccc671ffd44ba3eda314b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af134a44644bda96d9550b91fa6232651da8768b374224ee174b19c40fcad09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea42aa268e398a526977ac5e9ebe93ebdee1dca0fda720f18371aee0895c692\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c758c98e915e2be08d77b86fc43e646db1f5bb26d8b0af5b6cf8bf8723c430bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ae9969fd75ccc2a3ebf7f1ef1c25972047b3a55a1bd8eb716e94b3ce4d4d5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db91e84c596d10c21c2a5c9d9dcd9f4d9751ef7f8d380f3a7c27aff0107a11ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093ea9573a01420be3c43bfd13b2af2f4b13feeb05c9da3e6762730f95162d31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:30:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6fr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mfnkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.581707 4812 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca6eeeea-6618-4c5c-a451-b1b63009ea1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8171a371f41dff768857871c0c65074652353e3f0aacd2eb0d46b552019e9952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5aa3aaeadf29cbd101e15df8a07c1baf32a07720a68b41fa91d7e7f2c15def29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57zgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rdcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.599573 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19733a95-c245-4229-b484-68859e9debe0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8c7f98e8060f7b1839a22254035590e73e2dfa2f83f9e71fc351c289e9f676d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cafd711aef77001b341017c142aa5faccbedc60fa5faab908ede0287cd7e0e3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 
crc kubenswrapper[4812]: I0218 16:31:10.614912 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.628614 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a55ca680b53e3cd6e6cc263ab5827dbc747150b239c7457391dfc06bdf6f8684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.643475 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-962hh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf2d986a-6ff1-4ee6-9dd4-939aa0866efc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50d46f61e52edfe510dd9769e7931dab42b304d1a55dc02601450659f42d9fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jgcg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-962hh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.659138 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc4da39-1fda-4604-a089-b90b684c8a46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd8771ddb66ed5862a9844fea56e179ec5e0deea1168f7284483dc0ac3d9ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6lmmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hhkxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.660146 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.660186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.660201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.660223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.660241 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.672765 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"713f6ad5-53d1-453f-a193-e8ab26e31b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhbjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5cqfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.687743 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"907b7128-3246-4f3f-a89e-19e4654b98f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb07f6039bda7b7a52142e7089b86300b1d349c45b0fb25c70bc73e8d2f06c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5aac947200c4bb71d9b3d63b3fc757a250ee66d886f602ef6d708efed98484\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://099b04f329fa97c5fca83e5a19cf3fc37e99451d764c19d6ad71430fa10dce9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.701545 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcdd86d2fa767e8cbd1200df4e5fd5efb20349323e713dd636f7a2efa1c3f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03f52d16dc3219f7dfe1957434c10237e896b7f33ceba361f71f2aaf4bf10880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.720595 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.736757 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.753062 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhqsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc358ff-525d-49c3-b049-35d6ffea063f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27e3d1539ef76f4cab6ebc7edc5df39fb3e3a45003b7f446240c3ee9b5a906a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7rbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhqsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.767020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.767452 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.767591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.767735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.767856 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.769725 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29ce396c-ebe2-45a4-9717-e6fd10beb860\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bf35a89f69e1c1584c08555db9645ad515d3563f0d6a9ced2681d90a125f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117493a1883dd560162e9f321d3c97ec21ca5d17dc34ae28dbd2e091aa8363be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001e
dfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac0e8cd508651a225fd83de660d0faa5d6f556f8883df6dd7e06624e07857082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://518c262e4410fa430a5bde67e0f527d4551af20113d78652d87d9993c516a8a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.786757 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-prrcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2b75a7-be08-4a51-b100-9a75359bbd18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T16:30:49Z\\\",\\\"message\\\":\\\"2026-02-18T16:30:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5\\\\n2026-02-18T16:30:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb1f5c67-928e-436f-bcc4-b48b83322bf5 to /host/opt/cni/bin/\\\\n2026-02-18T16:30:04Z [verbose] multus-daemon started\\\\n2026-02-18T16:30:04Z [verbose] Readiness Indicator file check\\\\n2026-02-18T16:30:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:30:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmz76\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-prrcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.815564 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56631bd7-1e79-4a24-ab57-7774b75f8faa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T16:30:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 16:29:55.051734 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 16:29:55.053016 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3515714749/tls.crt::/tmp/serving-cert-3515714749/tls.key\\\\\\\"\\\\nI0218 16:30:00.557353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 16:30:00.559777 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 16:30:00.559791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 16:30:00.559816 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 16:30:00.559821 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 16:30:00.565049 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 16:30:00.565087 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 16:30:00.565119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 16:30:00.565122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 16:30:00.565126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 16:30:00.565130 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 16:30:00.565235 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 16:30:00.572882 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:29:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T16:29:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T16:29:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.835307 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T16:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3e570600a5035af98fad4acffee48f2ee1c1818542259a37bc581a6c7fd829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T16:30:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:10Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.872043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.872510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.872695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.872859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.873015 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.977002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.977078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.977131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.977173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:10 crc kubenswrapper[4812]: I0218 16:31:10.977200 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:10Z","lastTransitionTime":"2026-02-18T16:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.080518 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.080612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.080640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.080673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.080697 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.183820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.183871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.183882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.183902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.183917 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.286503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.286556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.286570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.286591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.286605 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.390895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.390973 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.390994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.391029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.391052 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.494380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.494443 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.494462 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.494491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.494517 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.506953 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:42:26.912231498 +0000 UTC Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.599061 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.599169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.599181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.599202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.599217 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.703238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.703324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.703343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.703403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.703423 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.807191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.807270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.807296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.807332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.807360 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.913510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.913574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.913593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.913618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:11 crc kubenswrapper[4812]: I0218 16:31:11.913639 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:11Z","lastTransitionTime":"2026-02-18T16:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.017345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.017450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.017470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.017506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.017527 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.121737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.121808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.121823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.121848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.121872 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.225604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.225682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.225702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.225731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.225753 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.328821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.328866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.328874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.328890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.328904 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.432064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.432158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.432230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.432262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.432282 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.507215 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:42:33.810129424 +0000 UTC Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.507506 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.507506 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.507659 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:12 crc kubenswrapper[4812]: E0218 16:31:12.507686 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:12 crc kubenswrapper[4812]: E0218 16:31:12.507706 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.507612 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:12 crc kubenswrapper[4812]: E0218 16:31:12.507900 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:12 crc kubenswrapper[4812]: E0218 16:31:12.508230 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.540404 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.542352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.542414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.542438 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.542467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.542493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.645114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.645719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.645820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.645942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.646160 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.749550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.749623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.749644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.749674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.749739 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.853350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.853428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.853446 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.853473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.853493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.956187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.956236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.956269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.956289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:12 crc kubenswrapper[4812]: I0218 16:31:12.956300 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:12Z","lastTransitionTime":"2026-02-18T16:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.060419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.060508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.060529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.060560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.060582 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.164003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.164063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.164077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.164127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.164146 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.267397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.267460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.267470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.267492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.267505 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.371530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.371607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.371634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.371662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.371685 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.475563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.475666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.475697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.475731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.475753 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.507470 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:11:20.091902647 +0000 UTC Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.579657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.579759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.579783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.579813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.579841 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.683506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.683616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.683636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.683668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.683687 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.787364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.787434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.787451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.787480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.787500 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.891325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.891402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.891426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.891456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.891482 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.994605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.994685 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.994704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.994729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:13 crc kubenswrapper[4812]: I0218 16:31:13.994748 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:13Z","lastTransitionTime":"2026-02-18T16:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.098539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.098597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.098611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.098633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.098648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.207089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.207209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.207247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.207327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.207355 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.311338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.311417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.311453 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.311489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.311513 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.414782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.414859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.414871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.414894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.414909 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.507427 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.507462 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.507535 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.507575 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:14 crc kubenswrapper[4812]: E0218 16:31:14.507729 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.507702 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:07:34.645892081 +0000 UTC Feb 18 16:31:14 crc kubenswrapper[4812]: E0218 16:31:14.507828 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:14 crc kubenswrapper[4812]: E0218 16:31:14.507901 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:14 crc kubenswrapper[4812]: E0218 16:31:14.507987 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.517960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.518029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.518051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.518080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.518138 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.621967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.622022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.622038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.622068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.622088 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.725572 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.725647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.725667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.725690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.725711 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.829008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.829086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.829141 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.829176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.829198 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.932856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.932941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.932965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.932995 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:14 crc kubenswrapper[4812]: I0218 16:31:14.933017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:14Z","lastTransitionTime":"2026-02-18T16:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.037565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.037666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.037693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.037732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.037759 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.141287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.141342 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.141354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.141375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.141388 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.245417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.245504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.245525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.245557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.245576 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.348764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.348844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.348869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.348896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.348914 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.452922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.452984 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.453000 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.453021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.453038 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.508837 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:15:25.47775636 +0000 UTC Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.557245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.557319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.557338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.557372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.557392 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.661039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.661139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.661158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.661190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.661217 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.765090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.765223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.765247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.765279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.765302 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.868516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.868583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.868598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.868622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.868639 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.971469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.971507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.971543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.971560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:15 crc kubenswrapper[4812]: I0218 16:31:15.971574 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:15Z","lastTransitionTime":"2026-02-18T16:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.075205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.075275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.075305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.075337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.075361 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.178679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.178757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.178775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.178813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.178835 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.282743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.282809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.282826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.282852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.282883 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.386256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.386333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.386352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.386382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.386407 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.490041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.490168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.490189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.490220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.490249 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.508792 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:16 crc kubenswrapper[4812]: E0218 16:31:16.508965 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.509246 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:16 crc kubenswrapper[4812]: E0218 16:31:16.509325 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.509487 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:16 crc kubenswrapper[4812]: E0218 16:31:16.509578 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.509759 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:06:35.923747653 +0000 UTC Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.509912 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:16 crc kubenswrapper[4812]: E0218 16:31:16.509996 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.593721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.593776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.593791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.593826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.593841 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.697140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.697216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.697237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.697268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.697287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.800439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.800488 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.800501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.800522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.800538 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.903604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.903663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.903679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.903702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:16 crc kubenswrapper[4812]: I0218 16:31:16.903715 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:16Z","lastTransitionTime":"2026-02-18T16:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.006310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.006388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.006410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.006439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.006460 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.109531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.109596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.109614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.109640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.109659 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.213538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.213604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.213624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.213652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.213673 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.316997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.317069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.317088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.317153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.317177 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.420117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.420170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.420184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.420205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.420218 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.510689 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:01:49.969333752 +0000 UTC Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.523876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.523949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.523974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.523998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.524020 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.627375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.627484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.627521 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.627555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.627579 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.731591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.731704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.731745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.731781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.731805 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.836267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.836334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.836355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.836381 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.836401 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.940434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.940513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.940541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.940579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:17 crc kubenswrapper[4812]: I0218 16:31:17.940605 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:17Z","lastTransitionTime":"2026-02-18T16:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.044598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.044681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.044699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.044727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.044747 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.148084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.148212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.148247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.148285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.148309 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.252142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.252217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.252236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.252268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.252287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.355855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.355934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.355951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.355974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.355990 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.459224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.459297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.459330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.459362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.459385 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.507435 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.507519 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.507556 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.507711 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.507472 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.507884 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.508162 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.508316 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.511346 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:50:29.683952372 +0000 UTC Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.563040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.563207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.563246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.563286 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.563313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.667525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.667607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.667633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.667666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.667689 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.771339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.771441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.771461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.771488 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.771512 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.871690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.871757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.871771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.871794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.871812 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.893185 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:18Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.898583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.898620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
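The node-status patch above is not malformed; it is rejected because the API server cannot call the node.network-node-identity.openshift.io admission webhook: the endpoint at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, well before the node's current time of 2026-02-18T16:31:18Z. The following Go sketch connects to that endpoint with verification disabled and prints the certificate validity window, as a way to confirm the expiry independently of the kubelet log (an illustration only, not part of any OpenShift tooling).

// certprobe.go - print the validity window of the TLS certificate served on
// 127.0.0.1:9743, the webhook endpoint named in the failed node-status patch.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint taken from the webhook error in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // we want to read the cert even though it is expired
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v (now %s)\n", now.After(cert.NotAfter), now.Format(time.RFC3339))
}

On a single-node CRC cluster the webhook endpoint is local, so the probe can be run directly on the node; if it reports the same notAfter as the log, the blocker is the webhook's serving certificate rather than anything in the kubelet's patch.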
event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.898634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.898658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.898673 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.918621 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:18Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.924395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.924437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.924449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.924469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.924483 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.942936 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:18Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.947682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.947723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.947737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.947755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.947769 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.966820 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:18Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.971474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.971556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.971593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.971627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.971689 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.991313 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T16:31:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"64817a4e-e396-49fc-8ea4-fa691a9f8933\\\",\\\"systemUUID\\\":\\\"98e69d53-b6df-43fa-8be4-eb3c6f91bf68\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T16:31:18Z is after 2025-08-24T17:21:41Z" Feb 18 16:31:18 crc kubenswrapper[4812]: E0218 16:31:18.991434 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.993876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.993916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.993935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.993956 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:18 crc kubenswrapper[4812]: I0218 16:31:18.993970 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:18Z","lastTransitionTime":"2026-02-18T16:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.096751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.096791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.096800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.096819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.096829 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.199430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.199500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.199523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.199552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.199574 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.302843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.302912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.302930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.302956 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.302975 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.406312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.406389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.406414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.406448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.406471 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.509068 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:19 crc kubenswrapper[4812]: E0218 16:31:19.509760 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.509930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.509974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.509995 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.510021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.510046 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.512408 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:33:01.513144536 +0000 UTC Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.613897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.613952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.613965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.613985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.613998 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.717767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.717839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.717864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.717896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.717920 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.822475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.822560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.822584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.822618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.822642 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.925749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.925840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.925880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.925920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:19 crc kubenswrapper[4812]: I0218 16:31:19.925944 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:19Z","lastTransitionTime":"2026-02-18T16:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.029513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.029599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.029623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.029656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.029719 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.133867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.133935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.133951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.133982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.134002 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.237686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.237766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.237785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.237820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.237839 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.341559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.341615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.341630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.341651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.341665 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.444761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.444814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.444826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.444845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.444858 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.507841 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.507961 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.508086 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.508082 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.508185 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.508387 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.508537 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.509072 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.512639 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:53:49.100123565 +0000 UTC Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.543584 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.543530535 podStartE2EDuration="28.543530535s" podCreationTimestamp="2026-02-18 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.543252348 +0000 UTC m=+100.808863297" watchObservedRunningTime="2026-02-18 16:31:20.543530535 +0000 UTC m=+100.809141474" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.548923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.549082 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.549133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.549170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.549218 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.625499 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mfnkd" podStartSLOduration=79.625472133 podStartE2EDuration="1m19.625472133s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.625431132 +0000 UTC m=+100.891042051" watchObservedRunningTime="2026-02-18 16:31:20.625472133 +0000 UTC m=+100.891083052" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.649690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.649937 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:31:20 crc kubenswrapper[4812]: E0218 16:31:20.650048 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs podName:713f6ad5-53d1-453f-a193-e8ab26e31b0e nodeName:}" failed. No retries permitted until 2026-02-18 16:32:24.650011894 +0000 UTC m=+164.915622983 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs") pod "network-metrics-daemon-5cqfx" (UID: "713f6ad5-53d1-453f-a193-e8ab26e31b0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.652501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.652539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.652551 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.652572 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.652585 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.657888 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rdcwd" podStartSLOduration=79.657863277 podStartE2EDuration="1m19.657863277s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.641706121 +0000 UTC m=+100.907317030" watchObservedRunningTime="2026-02-18 16:31:20.657863277 +0000 UTC m=+100.923474186" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.674453 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.674425473 podStartE2EDuration="1m20.674425473s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.657797846 +0000 UTC m=+100.923408765" watchObservedRunningTime="2026-02-18 16:31:20.674425473 +0000 UTC m=+100.940036382" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.704463 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-962hh" podStartSLOduration=79.704425595 podStartE2EDuration="1m19.704425595s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.704127688 +0000 UTC m=+100.969738617" watchObservedRunningTime="2026-02-18 16:31:20.704425595 +0000 UTC m=+100.970036524" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.718633 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podStartSLOduration=79.718602458 podStartE2EDuration="1m19.718602458s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.718333992 +0000 UTC m=+100.983944901" watchObservedRunningTime="2026-02-18 16:31:20.718602458 +0000 UTC m=+100.984213367" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.751742 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.751714418 podStartE2EDuration="52.751714418s" podCreationTimestamp="2026-02-18 16:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.750966962 +0000 UTC m=+101.016577871" watchObservedRunningTime="2026-02-18 16:31:20.751714418 +0000 UTC m=+101.017325317" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.754748 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.755000 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.755085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 
16:31:20.755208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.755283 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.781938 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.781911285 podStartE2EDuration="8.781911285s" podCreationTimestamp="2026-02-18 16:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.779592843 +0000 UTC m=+101.045203762" watchObservedRunningTime="2026-02-18 16:31:20.781911285 +0000 UTC m=+101.047522194" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.817960 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qhqsd" podStartSLOduration=79.81793862 podStartE2EDuration="1m19.81793862s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.816874076 +0000 UTC m=+101.082484985" watchObservedRunningTime="2026-02-18 16:31:20.81793862 +0000 UTC m=+101.083549529" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.833112 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.833070123 podStartE2EDuration="1m20.833070123s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.832684795 +0000 UTC m=+101.098295714" watchObservedRunningTime="2026-02-18 16:31:20.833070123 +0000 UTC m=+101.098681032" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.857939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.858021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.858040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.858069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.858090 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.960728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.960791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.960810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.960834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:20 crc kubenswrapper[4812]: I0218 16:31:20.960850 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:20Z","lastTransitionTime":"2026-02-18T16:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.063918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.063978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.063992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.064013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.064030 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.167824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.168389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.168588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.168742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.168881 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.272264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.272347 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.272362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.272382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.272396 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.375463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.375525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.375539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.375561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.375580 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.479393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.479456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.479469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.479489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.479506 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.512909 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:16:27.876303059 +0000 UTC Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.583167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.583225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.583242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.583268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.583287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.686393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.686461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.686479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.686517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.686540 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.790277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.790339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.790359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.790383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.790401 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.894202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.894259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.894272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.894297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.894313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.997827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.997909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.997932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.997963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:21 crc kubenswrapper[4812]: I0218 16:31:21.997985 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:21Z","lastTransitionTime":"2026-02-18T16:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.101967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.102035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.102060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.102092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.102151 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.205583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.205660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.205683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.205714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.205738 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.308998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.309057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.309134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.309169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.309196 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.413308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.413380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.413404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.413436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.413463 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.507613 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.507707 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:22 crc kubenswrapper[4812]: E0218 16:31:22.507831 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.507856 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.507931 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:22 crc kubenswrapper[4812]: E0218 16:31:22.508215 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:22 crc kubenswrapper[4812]: E0218 16:31:22.508287 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:22 crc kubenswrapper[4812]: E0218 16:31:22.508425 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.513164 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:23:29.977816887 +0000 UTC Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.516319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.516367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.516378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.516392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.516407 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.619336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.619438 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.619457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.619499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.619523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.722987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.723038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.723052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.723075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.723121 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.826940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.827002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.827015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.827041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.827056 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.930835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.930895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.930912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.930938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:22 crc kubenswrapper[4812]: I0218 16:31:22.930956 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:22Z","lastTransitionTime":"2026-02-18T16:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.033887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.034004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.034031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.034071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.034163 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.137584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.137696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.137723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.137761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.137787 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.242136 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.242213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.242236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.242268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.242294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.346534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.346598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.346617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.346640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.346656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.450325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.450376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.450386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.450403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.450413 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.513453 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:04:57.959834827 +0000 UTC Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.553459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.553598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.553642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.553686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.553725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.656921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.656982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.656992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.657012 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.657024 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.760118 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.760177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.760192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.760298 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.760345 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.864511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.864572 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.864584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.864609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.864625 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.967756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.967824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.967838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.967860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:23 crc kubenswrapper[4812]: I0218 16:31:23.967875 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:23Z","lastTransitionTime":"2026-02-18T16:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.071399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.071481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.071508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.071543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.071567 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.174527 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.174587 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.174600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.174624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.174641 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.277360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.277403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.277414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.277430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.277443 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.380353 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.380387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.380397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.380412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.380422 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.482818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.482899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.482917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.482947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.482967 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.508189 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.508281 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.508285 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.508202 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:24 crc kubenswrapper[4812]: E0218 16:31:24.508439 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:24 crc kubenswrapper[4812]: E0218 16:31:24.508525 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:24 crc kubenswrapper[4812]: E0218 16:31:24.508608 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:24 crc kubenswrapper[4812]: E0218 16:31:24.508731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.513671 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:22:19.012999979 +0000 UTC Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.586654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.586712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.586732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.586758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.586773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.689601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.689677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.689690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.689719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.689736 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.793895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.793941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.793951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.793982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.793993 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.896485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.896547 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.896567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.896591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.896608 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.999207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.999240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.999247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.999260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:24 crc kubenswrapper[4812]: I0218 16:31:24.999271 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:24Z","lastTransitionTime":"2026-02-18T16:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.103311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.103374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.103390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.103414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.103432 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.206295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.206366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.206383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.206413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.206432 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.310227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.310295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.310315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.310340 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.310362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.414169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.414230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.414248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.414270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.414284 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.514736 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:51:06.782966323 +0000 UTC Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.516887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.516952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.516977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.517007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.517029 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.620679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.620768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.620792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.620820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.620848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.724351 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.724428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.724451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.724485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.724511 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.828295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.828372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.828390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.828418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.828437 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.931998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.932063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.932073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.932092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:25 crc kubenswrapper[4812]: I0218 16:31:25.932140 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:25Z","lastTransitionTime":"2026-02-18T16:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.035657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.035728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.035746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.035783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.035803 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.139196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.139268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.139283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.139304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.139320 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.244029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.244125 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.244144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.244170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.244187 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.347825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.347902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.347915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.347938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.347953 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.452474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.452556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.452570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.452599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.452616 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.507320 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.507549 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.508089 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:26 crc kubenswrapper[4812]: E0218 16:31:26.508080 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.508196 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:26 crc kubenswrapper[4812]: E0218 16:31:26.508239 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:26 crc kubenswrapper[4812]: E0218 16:31:26.508409 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:26 crc kubenswrapper[4812]: E0218 16:31:26.508540 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.515171 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:14:02.947263179 +0000 UTC Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.556385 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.556454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.556468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.556494 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.556511 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.660453 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.660569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.660594 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.660629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.660657 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.764321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.764389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.764403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.764425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.764442 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.867466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.867808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.867925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.868010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.868078 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
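The entries above repeat roughly every hundred milliseconds: on each node-status sync the kubelet records the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and re-sets the Ready condition to False, because the container runtime keeps answering NetworkReady=false until a network plugin (OVN-Kubernetes via Multus on this cluster) writes a configuration file into /etc/kubernetes/cni/net.d/. A minimal Go sketch of the condition being reported, assuming the directory named in the message and the usual .conf/.conflist/.json extensions; this is illustrative only, not the runtime's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network configuration
    // (.conf, .conflist or .json file) exists under dir. This mirrors the
    // condition behind "no CNI configuration file in /etc/kubernetes/cni/net.d/",
    // as an illustration of what has to appear before NetworkReady flips to true.
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        if ok {
            fmt.Println("CNI configuration found; NetworkReady should become true")
        } else {
            fmt.Println("no CNI configuration yet; the node stays NotReady")
        }
    }

Once such a file appears, the runtime stops reporting NetworkPluginNotReady and the repeating NodeNotReady entries below end.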
Has your network provider started?"} Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.971337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.971391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.971405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.971428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:26 crc kubenswrapper[4812]: I0218 16:31:26.971448 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:26Z","lastTransitionTime":"2026-02-18T16:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.075222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.075335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.075358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.075392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.075417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.178012 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.178085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.178228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.178274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.178299 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.282049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.282121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.282135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.282157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.282172 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.385496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.385563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.385582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.385611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.385632 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.489496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.489584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.489608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.489643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.489669 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.516316 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:13:37.696278823 +0000 UTC Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.592239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.592300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.592315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.592337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.592357 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.695124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.695183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.695200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.695221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.695235 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.798004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.798075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.798126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.798161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.798190 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.901086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.901188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.901223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.901256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:27 crc kubenswrapper[4812]: I0218 16:31:27.901278 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:27Z","lastTransitionTime":"2026-02-18T16:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.004091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.004185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.004223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.004259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.004283 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.106896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.106958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.106981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.107011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.107038 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.210083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.210149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.210160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.210185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.210196 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.313553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.313597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.313606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.313619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.313630 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.416997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.417052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.417064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.417084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.417125 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.507974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.508023 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.508073 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:28 crc kubenswrapper[4812]: E0218 16:31:28.508289 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.508342 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:28 crc kubenswrapper[4812]: E0218 16:31:28.508513 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:28 crc kubenswrapper[4812]: E0218 16:31:28.508603 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:28 crc kubenswrapper[4812]: E0218 16:31:28.508869 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.517441 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:41:31.071197497 +0000 UTC Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.519132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.519170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.519180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.519198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.519209 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.622654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.622731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.622758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.622793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.622819 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.726200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.726256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.726271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.726288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.726299 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.830364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.830834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.830981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.831165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.831333 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.934888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.934945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.934965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.934993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:28 crc kubenswrapper[4812]: I0218 16:31:28.935011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:28Z","lastTransitionTime":"2026-02-18T16:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.039178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.039272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.039296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.039331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.039357 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:29Z","lastTransitionTime":"2026-02-18T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.070699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.071150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.071575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.071795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.072015 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T16:31:29Z","lastTransitionTime":"2026-02-18T16:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.152608 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-prrcg" podStartSLOduration=88.152579174 podStartE2EDuration="1m28.152579174s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:20.863257219 +0000 UTC m=+101.128868128" watchObservedRunningTime="2026-02-18 16:31:29.152579174 +0000 UTC m=+109.418190093" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.153523 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl"] Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.154202 4812 util.go:30] "No sandbox for pod can be found. 
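The pod_startup_latency_tracker entry just above shows where the 88-second figure for openshift-multus/multus-prrcg comes from: both pull timestamps are at their zero value, so nothing is subtracted for image pulls and podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic with the values from the entry; this is a hypothetical standalone program (the layout string is Go's default time formatting), not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-02-18 16:30:01 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-02-18 16:31:29.152579174 +0000 UTC")
        // With firstStartedPulling/lastFinishedPulling at the zero time,
        // the reported SLO duration is just the gap between creation and
        // the time the kubelet observed the pod running.
        fmt.Println(observed.Sub(created)) // prints 1m28.152579174s
    }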
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.157230 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.158398 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.158935 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.158951 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.260027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.260165 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.260234 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.260498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.260615 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc 
kubenswrapper[4812]: I0218 16:31:29.362727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.362937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.365000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.369154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.379483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e9a9741-c4bd-4d7b-8605-af4b8f55d04b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-thfsl\" (UID: \"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.485049 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" Feb 18 16:31:29 crc kubenswrapper[4812]: W0218 16:31:29.506152 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e9a9741_c4bd_4d7b_8605_af4b8f55d04b.slice/crio-33fe0cf761a0bd2e01f535236a60b249a65ce8d5a4785973036ac6870ef6635b WatchSource:0}: Error finding container 33fe0cf761a0bd2e01f535236a60b249a65ce8d5a4785973036ac6870ef6635b: Status 404 returned error can't find the container with id 33fe0cf761a0bd2e01f535236a60b249a65ce8d5a4785973036ac6870ef6635b Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.518630 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:05:50.09305247 +0000 UTC Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.518700 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 16:31:29 crc kubenswrapper[4812]: I0218 16:31:29.529771 4812 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.278553 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" event={"ID":"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b","Type":"ContainerStarted","Data":"322b534c4d7f0247f7e47990e5f077727efa169b91c5193922d544580806e1c7"} Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.278636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" event={"ID":"9e9a9741-c4bd-4d7b-8605-af4b8f55d04b","Type":"ContainerStarted","Data":"33fe0cf761a0bd2e01f535236a60b249a65ce8d5a4785973036ac6870ef6635b"} Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.507402 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.507477 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.507482 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:30 crc kubenswrapper[4812]: E0218 16:31:30.508769 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:30 crc kubenswrapper[4812]: I0218 16:31:30.508811 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:30 crc kubenswrapper[4812]: E0218 16:31:30.508984 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:30 crc kubenswrapper[4812]: E0218 16:31:30.509237 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:30 crc kubenswrapper[4812]: E0218 16:31:30.509447 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:32 crc kubenswrapper[4812]: I0218 16:31:32.507972 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:32 crc kubenswrapper[4812]: I0218 16:31:32.507971 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:32 crc kubenswrapper[4812]: I0218 16:31:32.508159 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:32 crc kubenswrapper[4812]: E0218 16:31:32.508351 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:32 crc kubenswrapper[4812]: I0218 16:31:32.508414 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:32 crc kubenswrapper[4812]: E0218 16:31:32.508821 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:32 crc kubenswrapper[4812]: E0218 16:31:32.508948 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:32 crc kubenswrapper[4812]: E0218 16:31:32.509079 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:32 crc kubenswrapper[4812]: I0218 16:31:32.509362 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:32 crc kubenswrapper[4812]: E0218 16:31:32.509524 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:31:34 crc kubenswrapper[4812]: I0218 16:31:34.508080 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:34 crc kubenswrapper[4812]: I0218 16:31:34.508161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:34 crc kubenswrapper[4812]: I0218 16:31:34.508197 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:34 crc kubenswrapper[4812]: E0218 16:31:34.508304 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:34 crc kubenswrapper[4812]: I0218 16:31:34.508395 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:34 crc kubenswrapper[4812]: E0218 16:31:34.508595 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:34 crc kubenswrapper[4812]: E0218 16:31:34.508726 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:34 crc kubenswrapper[4812]: E0218 16:31:34.508893 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.305411 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/1.log" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.306173 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/0.log" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.306249 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf2b75a7-be08-4a51-b100-9a75359bbd18" containerID="ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0" exitCode=1 Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.306311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerDied","Data":"ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0"} Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.306376 4812 scope.go:117] "RemoveContainer" containerID="796c72a676bec31b20225a60e6d053407fdb84b3a8837a2e3f7c9d89e440dff8" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.307073 4812 scope.go:117] "RemoveContainer" containerID="ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0" Feb 18 16:31:36 crc kubenswrapper[4812]: E0218 16:31:36.307385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-prrcg_openshift-multus(cf2b75a7-be08-4a51-b100-9a75359bbd18)\"" pod="openshift-multus/multus-prrcg" podUID="cf2b75a7-be08-4a51-b100-9a75359bbd18" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.341291 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-thfsl" podStartSLOduration=95.341239234 podStartE2EDuration="1m35.341239234s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:30.301049674 +0000 UTC m=+110.566660663" watchObservedRunningTime="2026-02-18 16:31:36.341239234 +0000 UTC m=+116.606850143" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.507357 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.507561 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:36 crc kubenswrapper[4812]: E0218 16:31:36.507634 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.507696 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:36 crc kubenswrapper[4812]: I0218 16:31:36.507868 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:36 crc kubenswrapper[4812]: E0218 16:31:36.507874 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:36 crc kubenswrapper[4812]: E0218 16:31:36.507928 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:36 crc kubenswrapper[4812]: E0218 16:31:36.508000 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:37 crc kubenswrapper[4812]: I0218 16:31:37.311453 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/1.log" Feb 18 16:31:38 crc kubenswrapper[4812]: I0218 16:31:38.507849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:38 crc kubenswrapper[4812]: I0218 16:31:38.507949 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:38 crc kubenswrapper[4812]: E0218 16:31:38.508020 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:38 crc kubenswrapper[4812]: E0218 16:31:38.508231 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:38 crc kubenswrapper[4812]: I0218 16:31:38.508277 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:38 crc kubenswrapper[4812]: I0218 16:31:38.508412 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:38 crc kubenswrapper[4812]: E0218 16:31:38.508517 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:38 crc kubenswrapper[4812]: E0218 16:31:38.508999 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.472050 4812 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 16:31:40 crc kubenswrapper[4812]: I0218 16:31:40.508092 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.511011 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:40 crc kubenswrapper[4812]: I0218 16:31:40.511170 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:40 crc kubenswrapper[4812]: I0218 16:31:40.511245 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.511297 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:40 crc kubenswrapper[4812]: I0218 16:31:40.511155 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.511811 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.511926 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:40 crc kubenswrapper[4812]: E0218 16:31:40.642270 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:31:42 crc kubenswrapper[4812]: I0218 16:31:42.507669 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:42 crc kubenswrapper[4812]: I0218 16:31:42.507755 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:42 crc kubenswrapper[4812]: I0218 16:31:42.507754 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:42 crc kubenswrapper[4812]: E0218 16:31:42.507876 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:42 crc kubenswrapper[4812]: I0218 16:31:42.508053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:42 crc kubenswrapper[4812]: E0218 16:31:42.508268 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:42 crc kubenswrapper[4812]: E0218 16:31:42.508428 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:42 crc kubenswrapper[4812]: E0218 16:31:42.508541 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:43 crc kubenswrapper[4812]: I0218 16:31:43.509152 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:43 crc kubenswrapper[4812]: E0218 16:31:43.509495 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v49jp_openshift-ovn-kubernetes(1c8bd0ec-00c8-4cc8-a689-073a151689d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" Feb 18 16:31:44 crc kubenswrapper[4812]: I0218 16:31:44.507773 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:44 crc kubenswrapper[4812]: I0218 16:31:44.507844 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:44 crc kubenswrapper[4812]: I0218 16:31:44.507858 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:44 crc kubenswrapper[4812]: E0218 16:31:44.508048 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:44 crc kubenswrapper[4812]: I0218 16:31:44.508081 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:44 crc kubenswrapper[4812]: E0218 16:31:44.508298 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:44 crc kubenswrapper[4812]: E0218 16:31:44.508506 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:44 crc kubenswrapper[4812]: E0218 16:31:44.508617 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:45 crc kubenswrapper[4812]: E0218 16:31:45.644524 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:31:46 crc kubenswrapper[4812]: I0218 16:31:46.508214 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:46 crc kubenswrapper[4812]: I0218 16:31:46.508238 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:46 crc kubenswrapper[4812]: I0218 16:31:46.508530 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:46 crc kubenswrapper[4812]: I0218 16:31:46.508566 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:46 crc kubenswrapper[4812]: E0218 16:31:46.508767 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:46 crc kubenswrapper[4812]: E0218 16:31:46.508818 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:46 crc kubenswrapper[4812]: E0218 16:31:46.508841 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:46 crc kubenswrapper[4812]: E0218 16:31:46.508712 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:48 crc kubenswrapper[4812]: I0218 16:31:48.513926 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:48 crc kubenswrapper[4812]: I0218 16:31:48.514091 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:48 crc kubenswrapper[4812]: E0218 16:31:48.514271 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:48 crc kubenswrapper[4812]: E0218 16:31:48.514457 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:48 crc kubenswrapper[4812]: I0218 16:31:48.514707 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:48 crc kubenswrapper[4812]: E0218 16:31:48.514867 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:48 crc kubenswrapper[4812]: I0218 16:31:48.515056 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:48 crc kubenswrapper[4812]: E0218 16:31:48.515436 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:50 crc kubenswrapper[4812]: I0218 16:31:50.507700 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:50 crc kubenswrapper[4812]: I0218 16:31:50.507857 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:50 crc kubenswrapper[4812]: I0218 16:31:50.507902 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:50 crc kubenswrapper[4812]: I0218 16:31:50.507961 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:50 crc kubenswrapper[4812]: E0218 16:31:50.510265 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:50 crc kubenswrapper[4812]: E0218 16:31:50.510380 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:50 crc kubenswrapper[4812]: I0218 16:31:50.510482 4812 scope.go:117] "RemoveContainer" containerID="ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0" Feb 18 16:31:50 crc kubenswrapper[4812]: E0218 16:31:50.510617 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:50 crc kubenswrapper[4812]: E0218 16:31:50.510474 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:50 crc kubenswrapper[4812]: E0218 16:31:50.645815 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 16:31:51 crc kubenswrapper[4812]: I0218 16:31:51.377038 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/1.log" Feb 18 16:31:51 crc kubenswrapper[4812]: I0218 16:31:51.377143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerStarted","Data":"c9cc37b9bafc7a9f647bcdd5d7319d73c4ed7efbbbde1b2c61a0de90b6b92e56"} Feb 18 16:31:52 crc kubenswrapper[4812]: I0218 16:31:52.507274 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:52 crc kubenswrapper[4812]: I0218 16:31:52.507287 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:52 crc kubenswrapper[4812]: E0218 16:31:52.507562 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:52 crc kubenswrapper[4812]: I0218 16:31:52.507287 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:52 crc kubenswrapper[4812]: E0218 16:31:52.507694 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:52 crc kubenswrapper[4812]: I0218 16:31:52.507314 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:52 crc kubenswrapper[4812]: E0218 16:31:52.507781 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:52 crc kubenswrapper[4812]: E0218 16:31:52.507822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:54 crc kubenswrapper[4812]: I0218 16:31:54.507562 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:54 crc kubenswrapper[4812]: I0218 16:31:54.507612 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:54 crc kubenswrapper[4812]: I0218 16:31:54.507662 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:54 crc kubenswrapper[4812]: I0218 16:31:54.507605 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:54 crc kubenswrapper[4812]: E0218 16:31:54.507772 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:54 crc kubenswrapper[4812]: E0218 16:31:54.507903 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:54 crc kubenswrapper[4812]: E0218 16:31:54.508199 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:54 crc kubenswrapper[4812]: E0218 16:31:54.508361 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:55 crc kubenswrapper[4812]: E0218 16:31:55.647953 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:31:56 crc kubenswrapper[4812]: I0218 16:31:56.507256 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:56 crc kubenswrapper[4812]: I0218 16:31:56.507374 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:56 crc kubenswrapper[4812]: E0218 16:31:56.507482 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:56 crc kubenswrapper[4812]: I0218 16:31:56.507587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:56 crc kubenswrapper[4812]: E0218 16:31:56.507731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:56 crc kubenswrapper[4812]: E0218 16:31:56.507925 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:56 crc kubenswrapper[4812]: I0218 16:31:56.508596 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:56 crc kubenswrapper[4812]: E0218 16:31:56.508873 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:58 crc kubenswrapper[4812]: I0218 16:31:58.508137 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:31:58 crc kubenswrapper[4812]: I0218 16:31:58.508236 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:31:58 crc kubenswrapper[4812]: I0218 16:31:58.508236 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:31:58 crc kubenswrapper[4812]: I0218 16:31:58.508259 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:58 crc kubenswrapper[4812]: E0218 16:31:58.508381 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:31:58 crc kubenswrapper[4812]: E0218 16:31:58.508515 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:58 crc kubenswrapper[4812]: E0218 16:31:58.509017 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:31:58 crc kubenswrapper[4812]: E0218 16:31:58.509316 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:31:58 crc kubenswrapper[4812]: I0218 16:31:58.509571 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.393679 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5cqfx"] Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.414052 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/3.log" Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.418239 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerStarted","Data":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.418262 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:31:59 crc kubenswrapper[4812]: E0218 16:31:59.418549 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.419091 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:31:59 crc kubenswrapper[4812]: I0218 16:31:59.471960 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podStartSLOduration=118.471933077 podStartE2EDuration="1m58.471933077s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:31:59.471513288 +0000 UTC m=+139.737124207" watchObservedRunningTime="2026-02-18 16:31:59.471933077 +0000 UTC m=+139.737543996" Feb 18 16:32:00 crc kubenswrapper[4812]: I0218 16:32:00.507356 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:00 crc kubenswrapper[4812]: I0218 16:32:00.507503 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:00 crc kubenswrapper[4812]: I0218 16:32:00.507577 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:00 crc kubenswrapper[4812]: E0218 16:32:00.515396 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:32:00 crc kubenswrapper[4812]: E0218 16:32:00.515723 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:32:00 crc kubenswrapper[4812]: E0218 16:32:00.515933 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:32:00 crc kubenswrapper[4812]: E0218 16:32:00.648827 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:32:01 crc kubenswrapper[4812]: I0218 16:32:01.507730 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:01 crc kubenswrapper[4812]: E0218 16:32:01.508339 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:32:02 crc kubenswrapper[4812]: I0218 16:32:02.507808 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:02 crc kubenswrapper[4812]: I0218 16:32:02.507928 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:02 crc kubenswrapper[4812]: E0218 16:32:02.508050 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:32:02 crc kubenswrapper[4812]: I0218 16:32:02.508233 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:02 crc kubenswrapper[4812]: E0218 16:32:02.508384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:32:02 crc kubenswrapper[4812]: E0218 16:32:02.508510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:32:03 crc kubenswrapper[4812]: I0218 16:32:03.414049 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:32:03 crc kubenswrapper[4812]: I0218 16:32:03.414244 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:32:03 crc kubenswrapper[4812]: I0218 16:32:03.508285 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:03 crc kubenswrapper[4812]: E0218 16:32:03.508662 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:32:04 crc kubenswrapper[4812]: I0218 16:32:04.507793 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:04 crc kubenswrapper[4812]: I0218 16:32:04.507934 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:04 crc kubenswrapper[4812]: I0218 16:32:04.507833 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:04 crc kubenswrapper[4812]: E0218 16:32:04.508065 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 16:32:04 crc kubenswrapper[4812]: E0218 16:32:04.508180 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 16:32:04 crc kubenswrapper[4812]: E0218 16:32:04.508262 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 16:32:05 crc kubenswrapper[4812]: I0218 16:32:05.507658 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:05 crc kubenswrapper[4812]: E0218 16:32:05.508323 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5cqfx" podUID="713f6ad5-53d1-453f-a193-e8ab26e31b0e" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.508454 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.508518 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.508724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.511893 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.512502 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.514471 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 16:32:06 crc kubenswrapper[4812]: I0218 16:32:06.515501 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 16:32:07 crc kubenswrapper[4812]: I0218 16:32:07.507852 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:07 crc kubenswrapper[4812]: I0218 16:32:07.511595 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 16:32:07 crc kubenswrapper[4812]: I0218 16:32:07.512345 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.291977 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:08 crc kubenswrapper[4812]: E0218 16:32:08.292201 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:34:10.292154435 +0000 UTC m=+270.557765374 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.292316 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.292391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.292448 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.299974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.300849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.314087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.339152 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.362746 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.393324 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.400137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:08 crc kubenswrapper[4812]: I0218 16:32:08.650902 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:08 crc kubenswrapper[4812]: W0218 16:32:08.838950 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b451666791c1e411c909f5aa2cc1656d549bd48d73ea824132781bc7f4f52a0b WatchSource:0}: Error finding container b451666791c1e411c909f5aa2cc1656d549bd48d73ea824132781bc7f4f52a0b: Status 404 returned error can't find the container with id b451666791c1e411c909f5aa2cc1656d549bd48d73ea824132781bc7f4f52a0b Feb 18 16:32:08 crc kubenswrapper[4812]: W0218 16:32:08.934326 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-3bc6af6815561dbac12e6d15c4d12535af30717d088414384ab5ed538596b456 WatchSource:0}: Error finding container 3bc6af6815561dbac12e6d15c4d12535af30717d088414384ab5ed538596b456: Status 404 returned error can't find the container with id 3bc6af6815561dbac12e6d15c4d12535af30717d088414384ab5ed538596b456 Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.470200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a0468fe1b217d7c6362126c8a2620c9a75022a1a8fd09985fbaf20b3e9dc5f2f"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.470802 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3bc6af6815561dbac12e6d15c4d12535af30717d088414384ab5ed538596b456"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.471417 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.472053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"63a331cb7c09c835f1406ec46bcdc664ecf550446f60463757d08104b1501ef6"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.472084 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b451666791c1e411c909f5aa2cc1656d549bd48d73ea824132781bc7f4f52a0b"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.474441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0e2810410ef1302754ded8626a464ec54d76ae8b5e11e839d892998505e9da83"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.474470 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"57ea5e6ffaa3348e0e7164a618c3635bf177a1818b2d19f993471665c902c73b"} Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.687291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.733825 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrhdr"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.734806 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.738567 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.739114 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.740234 4812 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.740275 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741024 4812 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.741049 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741146 4812 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.741159 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741313 4812 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.741338 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741376 4812 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.741450 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741895 4812 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.741929 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.741963 4812 reflector.go:561] 
object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.742059 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.743235 4812 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.743273 4812 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.743273 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.743329 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.743292 4812 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.743380 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 
16:32:09.743447 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.743887 4812 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.743923 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.743934 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.743964 4812 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.744035 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.744837 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.746044 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.746935 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.748454 4812 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.748520 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.752342 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.752752 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.752413 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.752342 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.752951 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.754275 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pm7xx"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.755359 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.755572 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.755690 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.755789 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.758853 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vk2pm"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.759266 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.759712 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.760244 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.760756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.762698 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.763289 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.763824 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.764535 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.765502 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-chx4h"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.765898 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.772696 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.773199 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 16:32:09 crc kubenswrapper[4812]: W0218 16:32:09.773531 4812 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Feb 18 16:32:09 crc kubenswrapper[4812]: E0218 16:32:09.773572 4812 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.773656 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.773798 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.773940 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.774065 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.779886 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.780512 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.782923 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zqqrs"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.783566 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.783930 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.784207 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.783936 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.784533 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.784763 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.785055 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.785206 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.785693 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.785803 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.785907 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.786153 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.786783 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.787014 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.787055 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.787123 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.788006 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.788122 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.788192 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.788295 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.792490 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.796247 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.801292 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.801597 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.801678 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.802045 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 16:32:09 crc 
kubenswrapper[4812]: I0218 16:32:09.802818 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.802951 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.803456 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.803900 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.804013 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.804130 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.810698 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r748z"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.819749 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.820762 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.820614 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.821689 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.821749 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.821748 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.821950 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.846597 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.846969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.847333 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.850001 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.850388 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.850687 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.854993 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.859537 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.860566 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.860079 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.867006 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.860771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.868578 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2dg5"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.868950 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.869007 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.868961 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.869955 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx6ch"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870118 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870382 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870539 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870367 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870863 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870385 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870497 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.871151 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.870544 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.871086 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.871756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.873389 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.873538 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.876986 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.879737 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xs668"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.880540 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.894407 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.895341 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.899578 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.900032 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.900465 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.900937 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.901025 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.901671 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.901871 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902056 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902224 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902450 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902695 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902744 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902800 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902807 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.902926 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.903045 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 
16:32:09.904111 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.904528 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.904706 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.904836 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.904894 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.906546 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.907247 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.911504 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.911703 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84npp"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.912437 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.913611 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914077 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9123084a-7c6e-463f-b006-ac02cc61c7b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjtf\" (UniqueName: \"kubernetes.io/projected/9123084a-7c6e-463f-b006-ac02cc61c7b9-kube-api-access-8tjtf\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-node-pullsecrets\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914220 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit-dir\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914245 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vpj\" (UniqueName: \"kubernetes.io/projected/d58bf47e-907b-42f2-89b0-919ee60b253e-kube-api-access-d9vpj\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914294 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914318 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r5jvh\" (UniqueName: \"kubernetes.io/projected/fd51434e-e723-41f3-9885-4edee69d2537-kube-api-access-r5jvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914344 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv59v\" (UniqueName: \"kubernetes.io/projected/d7f6a188-11db-48fd-b4e0-58abfe97aa07-kube-api-access-gv59v\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914368 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914390 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914414 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-trusted-ca\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914435 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73cc9692-bbfe-48a7-865b-c2b4ac637527-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914457 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kf7w\" (UniqueName: \"kubernetes.io/projected/e3cb5421-061d-41dc-a07d-1102a60ac54f-kube-api-access-4kf7w\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914518 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914620 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914677 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-client\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-encryption-config\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914909 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914948 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-dir\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.914996 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9123084a-7c6e-463f-b006-ac02cc61c7b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915072 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915174 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915589 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq67\" (UniqueName: \"kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq69d\" (UniqueName: \"kubernetes.io/projected/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-kube-api-access-nq69d\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915699 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73cc9692-bbfe-48a7-865b-c2b4ac637527-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915724 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-config\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd51434e-e723-41f3-9885-4edee69d2537-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5zd\" (UniqueName: \"kubernetes.io/projected/1299fd10-7a60-463b-a99d-a0674c3741f1-kube-api-access-qx5zd\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915877 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-policies\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915955 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjnm\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-kube-api-access-mxjnm\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.915981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6588p\" (UniqueName: \"kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916051 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1299fd10-7a60-463b-a99d-a0674c3741f1-serving-cert\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916071 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kfq\" (UniqueName: \"kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916117 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd51434e-e723-41f3-9885-4edee69d2537-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6rk\" (UniqueName: \"kubernetes.io/projected/4428dc60-fd63-4b22-8589-08c8ac3dde08-kube-api-access-6s6rk\") pod \"downloads-7954f5f757-chx4h\" (UID: \"4428dc60-fd63-4b22-8589-08c8ac3dde08\") " pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916236 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916332 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-client\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916946 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.916984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917016 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917035 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-serving-cert\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917061 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917082 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-config\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917120 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917151 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cb5421-061d-41dc-a07d-1102a60ac54f-serving-cert\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917210 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-service-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917395 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.917454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.919208 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.919997 4812 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.920118 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.920934 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.921329 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.921872 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.923333 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.923937 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.924652 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-flhsn"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.925537 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.928692 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzqj7"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.929670 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.930363 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.930606 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.930818 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.931076 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.932220 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrhdr"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.932771 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.933659 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.933712 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d2crn"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.934502 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.934983 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.935676 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.937216 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.937446 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.939031 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.939747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.940065 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.940717 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.942779 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bwprw"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.943519 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.943632 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.944562 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pm7xx"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.947139 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vk2pm"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.949185 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.952456 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zqqrs"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.954977 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.956581 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.957959 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.959059 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx6ch"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.960064 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.962212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.964137 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.970599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.975831 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2dg5"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.977355 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.978843 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzqj7"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.983779 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84npp"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.985497 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.986821 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.994431 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r748z"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.996905 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v"] Feb 18 16:32:09 crc kubenswrapper[4812]: I0218 16:32:09.999163 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.004635 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-flhsn"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.010384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.011949 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.013858 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-chx4h"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.016647 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.018722 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.019169 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.019295 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-images\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.019423 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5h2l\" (UniqueName: \"kubernetes.io/projected/a3e79a06-f5d4-407d-b601-8385a4d9c32e-kube-api-access-g5h2l\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.019619 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.019932 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.020154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.020184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.020228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-config\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.020269 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021187 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6rk\" (UniqueName: \"kubernetes.io/projected/4428dc60-fd63-4b22-8589-08c8ac3dde08-kube-api-access-6s6rk\") pod \"downloads-7954f5f757-chx4h\" (UID: \"4428dc60-fd63-4b22-8589-08c8ac3dde08\") " pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021248 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-client\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021287 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-serving-cert\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021384 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021719 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-config\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-config\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021786 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cb5421-061d-41dc-a07d-1102a60ac54f-serving-cert\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-service-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021839 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021856 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.021959 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzh84\" (UniqueName: \"kubernetes.io/projected/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-kube-api-access-zzh84\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-metrics-certs\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022060 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-node-pullsecrets\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit-dir\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022143 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vpj\" (UniqueName: \"kubernetes.io/projected/d58bf47e-907b-42f2-89b0-919ee60b253e-kube-api-access-d9vpj\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f40305-18d9-499e-90db-aed66391bcf0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9123084a-7c6e-463f-b006-ac02cc61c7b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tjtf\" (UniqueName: \"kubernetes.io/projected/9123084a-7c6e-463f-b006-ac02cc61c7b9-kube-api-access-8tjtf\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-default-certificate\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022297 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jvh\" (UniqueName: \"kubernetes.io/projected/fd51434e-e723-41f3-9885-4edee69d2537-kube-api-access-r5jvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv59v\" (UniqueName: \"kubernetes.io/projected/d7f6a188-11db-48fd-b4e0-58abfe97aa07-kube-api-access-gv59v\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022342 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022366 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022396 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022441 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-trusted-ca\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73cc9692-bbfe-48a7-865b-c2b4ac637527-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022492 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-service-ca-bundle\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022523 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kf7w\" (UniqueName: \"kubernetes.io/projected/e3cb5421-061d-41dc-a07d-1102a60ac54f-kube-api-access-4kf7w\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-client\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022658 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-encryption-config\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022682 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-images\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022750 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-client\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022801 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-stats-auth\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022837 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: 
\"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.022972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f40305-18d9-499e-90db-aed66391bcf0-config\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023124 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023156 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-dir\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6qg\" (UniqueName: \"kubernetes.io/projected/3e6ebf72-1d36-465d-9326-1923d28d5c28-kube-api-access-wq6qg\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023210 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-apiservice-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023233 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-service-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 
16:32:10.023290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9123084a-7c6e-463f-b006-ac02cc61c7b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023342 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023376 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw59v\" (UniqueName: \"kubernetes.io/projected/f78ac4e9-599f-466f-ad93-2e945ea78dc9-kube-api-access-sw59v\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023422 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023453 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq69d\" (UniqueName: \"kubernetes.io/projected/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-kube-api-access-nq69d\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023468 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvq67\" (UniqueName: 
\"kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023525 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73cc9692-bbfe-48a7-865b-c2b4ac637527-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023545 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqt9\" (UniqueName: \"kubernetes.io/projected/9cc05670-962b-48fc-a2c3-ad79a606f32c-kube-api-access-kjqt9\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023569 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-config\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd51434e-e723-41f3-9885-4edee69d2537-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023644 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f40305-18d9-499e-90db-aed66391bcf0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023661 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc05670-962b-48fc-a2c3-ad79a606f32c-proxy-tls\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-policies\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f78ac4e9-599f-466f-ad93-2e945ea78dc9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-serving-cert\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5zd\" (UniqueName: \"kubernetes.io/projected/1299fd10-7a60-463b-a99d-a0674c3741f1-kube-api-access-qx5zd\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023807 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kxc\" (UniqueName: \"kubernetes.io/projected/70067b7f-0a79-444f-8041-8683d4ae95b2-kube-api-access-n7kxc\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: 
\"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023844 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjnm\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-kube-api-access-mxjnm\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.027399 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-node-pullsecrets\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.027529 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit-dir\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.027719 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.023863 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.027902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-webhook-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.027769 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.028305 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.028309 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-dir\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6588p\" (UniqueName: \"kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dln\" (UniqueName: \"kubernetes.io/projected/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-kube-api-access-j5dln\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029315 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ebf72-1d36-465d-9326-1923d28d5c28-tmpfs\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70067b7f-0a79-444f-8041-8683d4ae95b2-metrics-tls\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.029980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd51434e-e723-41f3-9885-4edee69d2537-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.030043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.030084 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1299fd10-7a60-463b-a99d-a0674c3741f1-serving-cert\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: 
\"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.030191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kfq\" (UniqueName: \"kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.031449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9123084a-7c6e-463f-b006-ac02cc61c7b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.034841 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-serving-cert\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.035403 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-config\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.035648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.035660 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.035814 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.035960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.036074 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.036578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9123084a-7c6e-463f-b006-ac02cc61c7b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.036939 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1299fd10-7a60-463b-a99d-a0674c3741f1-service-ca-bundle\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.037029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3cb5421-061d-41dc-a07d-1102a60ac54f-serving-cert\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.037860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.037872 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.038330 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.039200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73cc9692-bbfe-48a7-865b-c2b4ac637527-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.039260 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 
16:32:10.039441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd51434e-e723-41f3-9885-4edee69d2537-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.039900 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.040133 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-config\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041665 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041853 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041874 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.041944 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-client\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.044162 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58bf47e-907b-42f2-89b0-919ee60b253e-audit-policies\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.044191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e3cb5421-061d-41dc-a07d-1102a60ac54f-trusted-ca\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.044396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-etcd-client\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.044431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.045085 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.045115 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.045782 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd51434e-e723-41f3-9885-4edee69d2537-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.046261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:10 crc 
kubenswrapper[4812]: I0218 16:32:10.046531 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.046702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58bf47e-907b-42f2-89b0-919ee60b253e-encryption-config\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.047331 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1299fd10-7a60-463b-a99d-a0674c3741f1-serving-cert\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.047401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.048360 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/73cc9692-bbfe-48a7-865b-c2b4ac637527-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.048813 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2kptm"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.049701 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.050056 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.051726 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.053108 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d2crn"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.054150 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.055080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.056139 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.056655 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.057481 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.058702 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2kptm"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.059296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.060656 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kjggh"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.061509 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.061724 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9lm7j"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.062919 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kjggh"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.065400 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9lm7j"] Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.063214 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.083907 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.096711 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.117451 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.130953 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-serving-cert\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f78ac4e9-599f-466f-ad93-2e945ea78dc9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kxc\" (UniqueName: \"kubernetes.io/projected/70067b7f-0a79-444f-8041-8683d4ae95b2-kube-api-access-n7kxc\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131071 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-webhook-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dln\" (UniqueName: \"kubernetes.io/projected/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-kube-api-access-j5dln\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ebf72-1d36-465d-9326-1923d28d5c28-tmpfs\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc 
kubenswrapper[4812]: I0218 16:32:10.131151 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70067b7f-0a79-444f-8041-8683d4ae95b2-metrics-tls\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131183 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-images\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5h2l\" (UniqueName: \"kubernetes.io/projected/a3e79a06-f5d4-407d-b601-8385a4d9c32e-kube-api-access-g5h2l\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131232 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-config\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-config\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131297 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzh84\" (UniqueName: \"kubernetes.io/projected/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-kube-api-access-zzh84\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-metrics-certs\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131331 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f40305-18d9-499e-90db-aed66391bcf0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-default-certificate\") pod 
\"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-service-ca-bundle\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131441 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-images\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-client\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-stats-auth\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f40305-18d9-499e-90db-aed66391bcf0-config\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131569 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-apiservice-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131593 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6qg\" (UniqueName: \"kubernetes.io/projected/3e6ebf72-1d36-465d-9326-1923d28d5c28-kube-api-access-wq6qg\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131618 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-service-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw59v\" (UniqueName: \"kubernetes.io/projected/f78ac4e9-599f-466f-ad93-2e945ea78dc9-kube-api-access-sw59v\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131723 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjqt9\" (UniqueName: \"kubernetes.io/projected/9cc05670-962b-48fc-a2c3-ad79a606f32c-kube-api-access-kjqt9\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f40305-18d9-499e-90db-aed66391bcf0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.131777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc05670-962b-48fc-a2c3-ad79a606f32c-proxy-tls\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.132626 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ebf72-1d36-465d-9326-1923d28d5c28-tmpfs\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.132725 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-config\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.132749 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f40305-18d9-499e-90db-aed66391bcf0-config\") pod 
\"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.133009 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.133017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-images\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.134052 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.134379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-config\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.134437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-service-ca\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.135377 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-etcd-client\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.135874 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f78ac4e9-599f-466f-ad93-2e945ea78dc9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.136565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.136632 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70067b7f-0a79-444f-8041-8683d4ae95b2-metrics-tls\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.136857 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.137194 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e79a06-f5d4-407d-b601-8385a4d9c32e-serving-cert\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.138000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f40305-18d9-499e-90db-aed66391bcf0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.177913 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.198352 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.206953 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-metrics-certs\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.218969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.226645 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-default-certificate\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.237645 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.249186 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-stats-auth\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.258020 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.277935 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.297301 4812 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.302456 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-service-ca-bundle\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.317885 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.337620 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.357058 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.376080 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.396161 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.417270 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.437513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.457572 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.477291 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.497453 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.517005 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.536617 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.556327 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.577315 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.583340 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9cc05670-962b-48fc-a2c3-ad79a606f32c-images\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.597934 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.617373 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.625195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc05670-962b-48fc-a2c3-ad79a606f32c-proxy-tls\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.637392 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.657114 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.676760 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.704316 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.719081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-apiservice-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.728066 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e6ebf72-1d36-465d-9326-1923d28d5c28-webhook-cert\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.739588 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.758730 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.777772 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.796543 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.816740 
4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.837585 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.857223 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.877227 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.897202 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.916928 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.935823 4812 request.go:700] Waited for 1.009870367s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&limit=500&resourceVersion=0 Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.938739 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.957493 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.976821 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 16:32:10 crc kubenswrapper[4812]: I0218 16:32:10.998320 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.018866 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.025053 4812 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.026418 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.526360668 +0000 UTC m=+151.791971577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028430 4812 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028484 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.528472275 +0000 UTC m=+151.794083184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028495 4812 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028526 4812 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028538 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.528525996 +0000 UTC m=+151.794136905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028551 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.528544837 +0000 UTC m=+151.794155746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028567 4812 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.028598 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert podName:6e8b5315-44dc-4ece-bd4c-2accb8b466c6 nodeName:}" failed. 
No retries permitted until 2026-02-18 16:32:11.528586047 +0000 UTC m=+151.794196956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-9lvnh" (UID: "6e8b5315-44dc-4ece-bd4c-2accb8b466c6") : failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.029720 4812 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.029857 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.529827605 +0000 UTC m=+151.795438534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync secret cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.029956 4812 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.029996 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.529988288 +0000 UTC m=+151.795599187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.030797 4812 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: E0218 16:32:11.030835 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca podName:d7f6a188-11db-48fd-b4e0-58abfe97aa07 nodeName:}" failed. No retries permitted until 2026-02-18 16:32:11.530826317 +0000 UTC m=+151.796437226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca") pod "apiserver-76f77b778f-xrhdr" (UID: "d7f6a188-11db-48fd-b4e0-58abfe97aa07") : failed to sync configmap cache: timed out waiting for the condition Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.036997 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.056916 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.077461 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.098556 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.117967 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.148503 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.160825 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.177530 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.198467 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.217683 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.246539 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.258397 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.286854 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.298543 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.317476 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.337982 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.358796 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.377674 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.398202 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.417321 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.438034 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.457653 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.477211 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.499484 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.523692 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.537913 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.556641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.557022 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.557362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.557611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.557858 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config\") pod 
\"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.558261 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.558544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.558826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.559168 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.577763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.597889 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.643465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjtf\" (UniqueName: \"kubernetes.io/projected/9123084a-7c6e-463f-b006-ac02cc61c7b9-kube-api-access-8tjtf\") pod \"openshift-config-operator-7777fb866f-9ztgl\" (UID: \"9123084a-7c6e-463f-b006-ac02cc61c7b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.675757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jvh\" (UniqueName: \"kubernetes.io/projected/fd51434e-e723-41f3-9885-4edee69d2537-kube-api-access-r5jvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwmt7\" (UID: \"fd51434e-e723-41f3-9885-4edee69d2537\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.707382 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.708129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vpj\" (UniqueName: \"kubernetes.io/projected/d58bf47e-907b-42f2-89b0-919ee60b253e-kube-api-access-d9vpj\") pod \"apiserver-7bbb656c7d-d9qws\" (UID: \"d58bf47e-907b-42f2-89b0-919ee60b253e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.712757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjnm\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-kube-api-access-mxjnm\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.742755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73cc9692-bbfe-48a7-865b-c2b4ac637527-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xndx9\" (UID: \"73cc9692-bbfe-48a7-865b-c2b4ac637527\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.763774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kf7w\" (UniqueName: \"kubernetes.io/projected/e3cb5421-061d-41dc-a07d-1102a60ac54f-kube-api-access-4kf7w\") pod \"console-operator-58897d9998-zqqrs\" (UID: \"e3cb5421-061d-41dc-a07d-1102a60ac54f\") " pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.766364 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.772202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6rk\" (UniqueName: \"kubernetes.io/projected/4428dc60-fd63-4b22-8589-08c8ac3dde08-kube-api-access-6s6rk\") pod \"downloads-7954f5f757-chx4h\" (UID: \"4428dc60-fd63-4b22-8589-08c8ac3dde08\") " pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.774536 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.822192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvq67\" (UniqueName: \"kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67\") pod \"oauth-openshift-558db77b4-pm7xx\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.834620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5zd\" (UniqueName: \"kubernetes.io/projected/1299fd10-7a60-463b-a99d-a0674c3741f1-kube-api-access-qx5zd\") pod \"authentication-operator-69f744f599-vk2pm\" (UID: \"1299fd10-7a60-463b-a99d-a0674c3741f1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.854794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6588p\" (UniqueName: \"kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p\") pod \"route-controller-manager-6576b87f9c-vt8tn\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.876358 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kfq\" (UniqueName: \"kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq\") pod \"console-f9d7485db-blqkx\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.879598 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.900708 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.903656 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.912212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl"] Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.920543 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.920543 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 16:32:11 crc kubenswrapper[4812]: W0218 16:32:11.928506 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9123084a_7c6e_463f_b006_ac02cc61c7b9.slice/crio-9084107c375935391a4655ec435f95201ee009614d2848b39fcb63777a1f8487 WatchSource:0}: Error finding container 9084107c375935391a4655ec435f95201ee009614d2848b39fcb63777a1f8487: Status 404 returned error can't find the container with id 9084107c375935391a4655ec435f95201ee009614d2848b39fcb63777a1f8487 Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.936351 4812 request.go:700] Waited for 1.886160902s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.939202 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.953434 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.963790 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.973498 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" Feb 18 16:32:11 crc kubenswrapper[4812]: I0218 16:32:11.977905 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.003316 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.003769 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zqqrs"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.017925 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.020350 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.031676 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.038735 4812 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.039712 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.059595 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 16:32:12 crc kubenswrapper[4812]: W0218 16:32:12.095283 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd51434e_e723_41f3_9885_4edee69d2537.slice/crio-6f1c68fe4cb0f7fd3ab44b163aa4cd8c267bedb907f97d6c1ca7b739759f2f1a WatchSource:0}: Error finding container 6f1c68fe4cb0f7fd3ab44b163aa4cd8c267bedb907f97d6c1ca7b739759f2f1a: Status 404 returned error can't find the container with id 6f1c68fe4cb0f7fd3ab44b163aa4cd8c267bedb907f97d6c1ca7b739759f2f1a Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.104247 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f40305-18d9-499e-90db-aed66391bcf0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vgpcj\" (UID: \"90f40305-18d9-499e-90db-aed66391bcf0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.113328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzh84\" (UniqueName: \"kubernetes.io/projected/7f8a50e5-17af-449c-9e9f-ff051ba9c99f-kube-api-access-zzh84\") pod \"machine-api-operator-5694c8668f-r748z\" (UID: \"7f8a50e5-17af-449c-9e9f-ff051ba9c99f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.139030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kxc\" (UniqueName: \"kubernetes.io/projected/70067b7f-0a79-444f-8041-8683d4ae95b2-kube-api-access-n7kxc\") pod \"dns-operator-744455d44c-t2dg5\" (UID: \"70067b7f-0a79-444f-8041-8683d4ae95b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.148741 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.160602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dln\" (UniqueName: \"kubernetes.io/projected/583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b-kube-api-access-j5dln\") pod \"router-default-5444994796-xs668\" (UID: \"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b\") " pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.177825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5h2l\" (UniqueName: \"kubernetes.io/projected/a3e79a06-f5d4-407d-b601-8385a4d9c32e-kube-api-access-g5h2l\") pod \"etcd-operator-b45778765-hx6ch\" (UID: \"a3e79a06-f5d4-407d-b601-8385a4d9c32e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.183839 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.194543 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.197591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6qg\" (UniqueName: \"kubernetes.io/projected/3e6ebf72-1d36-465d-9326-1923d28d5c28-kube-api-access-wq6qg\") pod \"packageserver-d55dfcdfc-qsrdf\" (UID: \"3e6ebf72-1d36-465d-9326-1923d28d5c28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.206254 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.224127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw59v\" (UniqueName: \"kubernetes.io/projected/f78ac4e9-599f-466f-ad93-2e945ea78dc9-kube-api-access-sw59v\") pod \"cluster-samples-operator-665b6dd947-lwcr8\" (UID: \"f78ac4e9-599f-466f-ad93-2e945ea78dc9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.245677 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjqt9\" (UniqueName: \"kubernetes.io/projected/9cc05670-962b-48fc-a2c3-ad79a606f32c-kube-api-access-kjqt9\") pod \"machine-config-operator-74547568cd-84npp\" (UID: \"9cc05670-962b-48fc-a2c3-ad79a606f32c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.254792 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.257630 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.260039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.277194 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.283657 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pm7xx"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.320901 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.327870 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-image-import-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.334613 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.337377 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.347321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv59v\" (UniqueName: \"kubernetes.io/projected/d7f6a188-11db-48fd-b4e0-58abfe97aa07-kube-api-access-gv59v\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: W0218 16:32:12.358351 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee898c2_0a23_41cb_a680_709b6e8104ff.slice/crio-b8c5d5c4080b200667b461f179a09dd4ad91aeaab254d2e5de209a5214b035aa WatchSource:0}: Error finding container b8c5d5c4080b200667b461f179a09dd4ad91aeaab254d2e5de209a5214b035aa: Status 404 returned error can't find the container with id b8c5d5c4080b200667b461f179a09dd4ad91aeaab254d2e5de209a5214b035aa Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.359851 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.369054 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.369873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-etcd-serving-ca\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") 
" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.370521 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbfq4\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.370662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.370753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.370883 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371247 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f5c004-b206-4144-9754-456af64c615e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29f5c004-b206-4144-9754-456af64c615e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.371906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.372439 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.372686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f5c004-b206-4144-9754-456af64c615e-config\") pod 
\"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.372871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdnck\" (UniqueName: \"kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.372968 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:12.872944673 +0000 UTC m=+153.138555762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.373028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22840f71-c3ca-4199-9bb5-34cc6bab5140-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.373070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22840f71-c3ca-4199-9bb5-34cc6bab5140-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.373408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22840f71-c3ca-4199-9bb5-34cc6bab5140-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.373524 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jbsj\" (UniqueName: \"kubernetes.io/projected/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-kube-api-access-6jbsj\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.373593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.378570 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.390207 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.395869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq69d\" (UniqueName: \"kubernetes.io/projected/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-kube-api-access-nq69d\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.398761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.405904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-serving-cert\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.418012 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.420911 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.424517 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.428788 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.432193 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e8b5315-44dc-4ece-bd4c-2accb8b466c6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lvnh\" (UID: \"6e8b5315-44dc-4ece-bd4c-2accb8b466c6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.438499 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.454402 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vk2pm"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.459483 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.474525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.474740 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:12.974703654 +0000 UTC m=+153.240314573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.474913 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.474964 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbfq4\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.474995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-srv-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64qp\" (UniqueName: \"kubernetes.io/projected/63db1dda-de3b-4fcc-aa34-812335a7700b-kube-api-access-x64qp\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-srv-cert\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475128 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 
16:32:12.475169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97lk7\" (UniqueName: \"kubernetes.io/projected/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-kube-api-access-97lk7\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475274 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db280f9-3aeb-4461-b688-25611e6b3694-config-volume\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475316 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kwq\" (UniqueName: \"kubernetes.io/projected/4c5f002a-45f3-4079-ac97-d583dfb50984-kube-api-access-r9kwq\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-node-bootstrap-token\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6bq\" (UniqueName: \"kubernetes.io/projected/ae617623-420a-436a-9ec3-3710fc1735fe-kube-api-access-8j6bq\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475448 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-profile-collector-cert\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbrq\" (UniqueName: \"kubernetes.io/projected/7db280f9-3aeb-4461-b688-25611e6b3694-kube-api-access-2qbrq\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475571 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jspt\" (UniqueName: \"kubernetes.io/projected/f019b253-047e-4a21-9e54-52cdc5835d33-kube-api-access-9jspt\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475695 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63db1dda-de3b-4fcc-aa34-812335a7700b-cert\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mjqhh\" (UniqueName: \"kubernetes.io/projected/f678303a-5eda-4e0c-b70b-e91699765112-kube-api-access-mjqhh\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxtl\" (UniqueName: \"kubernetes.io/projected/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-kube-api-access-vjxtl\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c22f80e8-3539-42d9-b007-ba361b470232-signing-key\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475907 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gns8k\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-kube-api-access-gns8k\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2vc\" (UniqueName: \"kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.475956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-socket-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f5c004-b206-4144-9754-456af64c615e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5385719a-9a09-44ac-a1fb-0692c74f2bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476156 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2ct\" (UniqueName: \"kubernetes.io/projected/95865f35-2754-47c9-ac17-d8094427e2ea-kube-api-access-kh2ct\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476179 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6c5\" (UniqueName: \"kubernetes.io/projected/5385719a-9a09-44ac-a1fb-0692c74f2bdf-kube-api-access-tm6c5\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476204 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29f5c004-b206-4144-9754-456af64c615e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7db280f9-3aeb-4461-b688-25611e6b3694-metrics-tls\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/583f9d4c-ab28-4bbf-a346-116bdf6825cf-trusted-ca\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wpb\" (UniqueName: \"kubernetes.io/projected/c22f80e8-3539-42d9-b007-ba361b470232-kube-api-access-w6wpb\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476389 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476434 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5f002a-45f3-4079-ac97-d583dfb50984-config\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476467 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-certs\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476492 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5f002a-45f3-4079-ac97-d583dfb50984-serving-cert\") pod \"service-ca-operator-777779d784-flhsn\" (UID: 
\"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476516 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f5c004-b206-4144-9754-456af64c615e-config\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-plugins-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-auth-proxy-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476693 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5385719a-9a09-44ac-a1fb-0692c74f2bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476735 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476765 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c22f80e8-3539-42d9-b007-ba361b470232-signing-cabundle\") pod \"service-ca-9c57cc56f-d2crn\" (UID: 
\"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-mountpoint-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.476941 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdnck\" (UniqueName: \"kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.477000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22840f71-c3ca-4199-9bb5-34cc6bab5140-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.477040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22840f71-c3ca-4199-9bb5-34cc6bab5140-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.477066 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmz27\" (UniqueName: \"kubernetes.io/projected/f5388af9-a696-4215-844e-bbafcd37b2ec-kube-api-access-kmz27\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.477790 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f678303a-5eda-4e0c-b70b-e91699765112-machine-approver-tls\") pod \"machine-approver-56656f9798-xbxtf\" 
(UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-csi-data-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478279 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22840f71-c3ca-4199-9bb5-34cc6bab5140-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jbsj\" (UniqueName: \"kubernetes.io/projected/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-kube-api-access-6jbsj\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478324 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xx2d\" (UniqueName: \"kubernetes.io/projected/f0cdcf15-ce40-4e82-b45b-8265e256c319-kube-api-access-4xx2d\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478347 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlr4\" (UniqueName: \"kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478371 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-registration-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhwsj\" (UniqueName: \"kubernetes.io/projected/cf0c0c98-dc78-4aa8-aefe-df4d889d2582-kube-api-access-bhwsj\") pod \"migrator-59844c95c7-w7f9v\" (UID: \"cf0c0c98-dc78-4aa8-aefe-df4d889d2582\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478420 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca\") pod \"controller-manager-879f6c89f-t65dk\" (UID: 
\"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5388af9-a696-4215-844e-bbafcd37b2ec-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.478468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/583f9d4c-ab28-4bbf-a346-116bdf6825cf-metrics-tls\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.480054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.481446 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f5c004-b206-4144-9754-456af64c615e-config\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.482943 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:12.982924086 +0000 UTC m=+153.248535185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.483178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.483322 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.483935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.484137 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.484580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.484755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22840f71-c3ca-4199-9bb5-34cc6bab5140-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.485076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-audit\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.485113 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.485814 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.486061 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.487034 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f5c004-b206-4144-9754-456af64c615e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.488740 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.490223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22840f71-c3ca-4199-9bb5-34cc6bab5140-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.490507 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: W0218 16:32:12.493488 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1299fd10_7a60_463b_a99d_a0674c3741f1.slice/crio-7171ecd7c0f12132e3cfd2b6f6202f78a4a96d39f7c4067e27719ac632887d8f WatchSource:0}: Error finding container 7171ecd7c0f12132e3cfd2b6f6202f78a4a96d39f7c4067e27719ac632887d8f: Status 404 returned error can't find the container with id 7171ecd7c0f12132e3cfd2b6f6202f78a4a96d39f7c4067e27719ac632887d8f Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.495523 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.506035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" event={"ID":"677e33bb-1571-4051-bbe6-64dfc16f4520","Type":"ContainerStarted","Data":"f396373b4c81d2c2738ffd611b51f3186ab2dc450a2f8315f7abe78d12832094"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.506261 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.506449 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.509671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7f6a188-11db-48fd-b4e0-58abfe97aa07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.525646 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.526314 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.533303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" event={"ID":"fd51434e-e723-41f3-9885-4edee69d2537","Type":"ContainerStarted","Data":"b0453682a72e0cf64181cd2b10031d66cd17ddc504eb3b07e1e856f4bbabfc9d"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.533597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" event={"ID":"fd51434e-e723-41f3-9885-4edee69d2537","Type":"ContainerStarted","Data":"6f1c68fe4cb0f7fd3ab44b163aa4cd8c267bedb907f97d6c1ca7b739759f2f1a"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.533682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-blqkx" event={"ID":"2ee898c2-0a23-41cb-a680-709b6e8104ff","Type":"ContainerStarted","Data":"b8c5d5c4080b200667b461f179a09dd4ad91aeaab254d2e5de209a5214b035aa"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.533513 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.534906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" event={"ID":"d58bf47e-907b-42f2-89b0-919ee60b253e","Type":"ContainerStarted","Data":"e101ba37e31ad106b563b9db08db261cf3223d97f0df68e5573dcb6bb775350b"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.537123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" event={"ID":"9123084a-7c6e-463f-b006-ac02cc61c7b9","Type":"ContainerStarted","Data":"b5d8cdf885572594c485a24560a086fed759b962366c0b1871704c2842cc118b"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.537204 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" event={"ID":"9123084a-7c6e-463f-b006-ac02cc61c7b9","Type":"ContainerStarted","Data":"9084107c375935391a4655ec435f95201ee009614d2848b39fcb63777a1f8487"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.538194 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.542053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" event={"ID":"e3cb5421-061d-41dc-a07d-1102a60ac54f","Type":"ContainerStarted","Data":"8ff7c2b5f46a637ed3e8688d9623656c734800ca2d9110a45b5530fc07955370"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.542081 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" event={"ID":"e3cb5421-061d-41dc-a07d-1102a60ac54f","Type":"ContainerStarted","Data":"b5e9b64df8041bfe737eae09a00a746d5f9f09a1ae164af5d44e60633aeedb0a"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.543002 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.544472 4812 patch_prober.go:28] interesting pod/console-operator-58897d9998-zqqrs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.544506 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" podUID="e3cb5421-061d-41dc-a07d-1102a60ac54f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.545477 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7f6a188-11db-48fd-b4e0-58abfe97aa07-encryption-config\") pod \"apiserver-76f77b778f-xrhdr\" (UID: \"d7f6a188-11db-48fd-b4e0-58abfe97aa07\") " pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.547850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" 
event={"ID":"90f40305-18d9-499e-90db-aed66391bcf0","Type":"ContainerStarted","Data":"46672575b6305914a5b0fb9772b1cb4b2ab289675327ddda6ae1c2c4eb1a985a"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.549670 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" event={"ID":"73cc9692-bbfe-48a7-865b-c2b4ac637527","Type":"ContainerStarted","Data":"2b746e80dfc7ee7eccfd3a268a66f1d4cd91866b906337d81e811acd1f85d3a3"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.554013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" event={"ID":"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2","Type":"ContainerStarted","Data":"34b642dae474828ae410a3dba2dc5bb1ad49cab46bd87365f3757ffa5e66a05a"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.554953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xs668" event={"ID":"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b","Type":"ContainerStarted","Data":"f469ca0d2299949fbf04756a8c34801e7db5c0ad0639a5c05a62034e2daeed60"} Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.579923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97lk7\" (UniqueName: \"kubernetes.io/projected/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-kube-api-access-97lk7\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db280f9-3aeb-4461-b688-25611e6b3694-config-volume\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580397 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kwq\" (UniqueName: \"kubernetes.io/projected/4c5f002a-45f3-4079-ac97-d583dfb50984-kube-api-access-r9kwq\") pod 
\"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-node-bootstrap-token\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6bq\" (UniqueName: \"kubernetes.io/projected/ae617623-420a-436a-9ec3-3710fc1735fe-kube-api-access-8j6bq\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-profile-collector-cert\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580521 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qbrq\" (UniqueName: \"kubernetes.io/projected/7db280f9-3aeb-4461-b688-25611e6b3694-kube-api-access-2qbrq\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580562 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jspt\" (UniqueName: \"kubernetes.io/projected/f019b253-047e-4a21-9e54-52cdc5835d33-kube-api-access-9jspt\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63db1dda-de3b-4fcc-aa34-812335a7700b-cert\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 
16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqhh\" (UniqueName: \"kubernetes.io/projected/f678303a-5eda-4e0c-b70b-e91699765112-kube-api-access-mjqhh\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxtl\" (UniqueName: \"kubernetes.io/projected/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-kube-api-access-vjxtl\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580658 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c22f80e8-3539-42d9-b007-ba361b470232-signing-key\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580685 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gns8k\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-kube-api-access-gns8k\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2vc\" (UniqueName: \"kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-socket-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5385719a-9a09-44ac-a1fb-0692c74f2bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580797 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh2ct\" (UniqueName: \"kubernetes.io/projected/95865f35-2754-47c9-ac17-d8094427e2ea-kube-api-access-kh2ct\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6c5\" 
(UniqueName: \"kubernetes.io/projected/5385719a-9a09-44ac-a1fb-0692c74f2bdf-kube-api-access-tm6c5\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7db280f9-3aeb-4461-b688-25611e6b3694-metrics-tls\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/583f9d4c-ab28-4bbf-a346-116bdf6825cf-trusted-ca\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wpb\" (UniqueName: \"kubernetes.io/projected/c22f80e8-3539-42d9-b007-ba361b470232-kube-api-access-w6wpb\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5f002a-45f3-4079-ac97-d583dfb50984-config\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580947 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-certs\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580964 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5f002a-45f3-4079-ac97-d583dfb50984-serving-cert\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.580985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-plugins-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581024 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-auth-proxy-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5385719a-9a09-44ac-a1fb-0692c74f2bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581061 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581112 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c22f80e8-3539-42d9-b007-ba361b470232-signing-cabundle\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-mountpoint-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581149 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmz27\" (UniqueName: \"kubernetes.io/projected/f5388af9-a696-4215-844e-bbafcd37b2ec-kube-api-access-kmz27\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/f678303a-5eda-4e0c-b70b-e91699765112-machine-approver-tls\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-csi-data-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581247 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xx2d\" (UniqueName: \"kubernetes.io/projected/f0cdcf15-ce40-4e82-b45b-8265e256c319-kube-api-access-4xx2d\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tlr4\" (UniqueName: \"kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581282 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-registration-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhwsj\" (UniqueName: \"kubernetes.io/projected/cf0c0c98-dc78-4aa8-aefe-df4d889d2582-kube-api-access-bhwsj\") pod \"migrator-59844c95c7-w7f9v\" (UID: \"cf0c0c98-dc78-4aa8-aefe-df4d889d2582\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581322 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5388af9-a696-4215-844e-bbafcd37b2ec-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/583f9d4c-ab28-4bbf-a346-116bdf6825cf-metrics-tls\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-srv-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581395 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x64qp\" (UniqueName: \"kubernetes.io/projected/63db1dda-de3b-4fcc-aa34-812335a7700b-kube-api-access-x64qp\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581414 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-srv-cert\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.581794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db280f9-3aeb-4461-b688-25611e6b3694-config-volume\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.582325 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.582453 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.082427527 +0000 UTC m=+153.348038586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.582925 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-csi-data-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.583368 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.583383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-registration-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.584004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.584081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-plugins-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.584756 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f678303a-5eda-4e0c-b70b-e91699765112-auth-proxy-config\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.585380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5385719a-9a09-44ac-a1fb-0692c74f2bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.586644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.587124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-socket-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.588348 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c22f80e8-3539-42d9-b007-ba361b470232-signing-key\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.589411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae617623-420a-436a-9ec3-3710fc1735fe-srv-cert\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.589487 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f019b253-047e-4a21-9e54-52cdc5835d33-mountpoint-dir\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.592973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.594530 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5f002a-45f3-4079-ac97-d583dfb50984-config\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.595176 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c22f80e8-3539-42d9-b007-ba361b470232-signing-cabundle\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.596222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.597789 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.597832 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7db280f9-3aeb-4461-b688-25611e6b3694-metrics-tls\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.599614 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/583f9d4c-ab28-4bbf-a346-116bdf6825cf-trusted-ca\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.600658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.605865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5388af9-a696-4215-844e-bbafcd37b2ec-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.606014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbfq4\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.607942 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5f002a-45f3-4079-ac97-d583dfb50984-serving-cert\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.608466 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f678303a-5eda-4e0c-b70b-e91699765112-machine-approver-tls\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.608849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-node-bootstrap-token\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc 
kubenswrapper[4812]: I0218 16:32:12.610242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5385719a-9a09-44ac-a1fb-0692c74f2bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.613652 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0cdcf15-ce40-4e82-b45b-8265e256c319-srv-cert\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.613774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63db1dda-de3b-4fcc-aa34-812335a7700b-cert\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.613864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/583f9d4c-ab28-4bbf-a346-116bdf6825cf-metrics-tls\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.614006 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.618694 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95865f35-2754-47c9-ac17-d8094427e2ea-certs\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.625806 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdnck\" (UniqueName: \"kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck\") pod \"controller-manager-879f6c89f-t65dk\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.632699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29f5c004-b206-4144-9754-456af64c615e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9g4x\" (UID: \"29f5c004-b206-4144-9754-456af64c615e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.663530 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jbsj\" (UniqueName: \"kubernetes.io/projected/2d3ca68f-4b91-4a81-b36c-d25fb10d00d4-kube-api-access-6jbsj\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-2j89k\" (UID: \"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.675541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22840f71-c3ca-4199-9bb5-34cc6bab5140-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lpl58\" (UID: \"22840f71-c3ca-4199-9bb5-34cc6bab5140\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.683419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.685280 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.185254013 +0000 UTC m=+153.450865102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.701886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.735210 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qbrq\" (UniqueName: \"kubernetes.io/projected/7db280f9-3aeb-4461-b688-25611e6b3694-kube-api-access-2qbrq\") pod \"dns-default-kjggh\" (UID: \"7db280f9-3aeb-4461-b688-25611e6b3694\") " pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.759294 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kwq\" (UniqueName: \"kubernetes.io/projected/4c5f002a-45f3-4079-ac97-d583dfb50984-kube-api-access-r9kwq\") pod \"service-ca-operator-777779d784-flhsn\" (UID: \"4c5f002a-45f3-4079-ac97-d583dfb50984\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.769619 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.784513 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97lk7\" (UniqueName: \"kubernetes.io/projected/95d36ab6-23d1-4fa5-ba03-ece8e724b74a-kube-api-access-97lk7\") pod \"package-server-manager-789f6589d5-rk27v\" (UID: \"95d36ab6-23d1-4fa5-ba03-ece8e724b74a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.785755 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.785963 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.28592113 +0000 UTC m=+153.551532039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.786491 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.787423 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.287386063 +0000 UTC m=+153.552996972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.791930 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.808136 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t2dg5"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.808876 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.811362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6bq\" (UniqueName: \"kubernetes.io/projected/ae617623-420a-436a-9ec3-3710fc1735fe-kube-api-access-8j6bq\") pod \"catalog-operator-68c6474976-trkrb\" (UID: \"ae617623-420a-436a-9ec3-3710fc1735fe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.820113 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.820418 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xx2d\" (UniqueName: \"kubernetes.io/projected/f0cdcf15-ce40-4e82-b45b-8265e256c319-kube-api-access-4xx2d\") pod \"olm-operator-6b444d44fb-jtzmh\" (UID: \"f0cdcf15-ce40-4e82-b45b-8265e256c319\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.834685 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.837245 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tlr4\" (UniqueName: \"kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4\") pod \"collect-profiles-29523870-4nfr4\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.860585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.862916 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhwsj\" (UniqueName: \"kubernetes.io/projected/cf0c0c98-dc78-4aa8-aefe-df4d889d2582-kube-api-access-bhwsj\") pod \"migrator-59844c95c7-w7f9v\" (UID: \"cf0c0c98-dc78-4aa8-aefe-df4d889d2582\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.875260 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.883063 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.887667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.887927 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.387889067 +0000 UTC m=+153.653499976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.888180 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.888653 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.388634893 +0000 UTC m=+153.654245802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.890684 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.894968 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqhh\" (UniqueName: \"kubernetes.io/projected/f678303a-5eda-4e0c-b70b-e91699765112-kube-api-access-mjqhh\") pod \"machine-approver-56656f9798-xbxtf\" (UID: \"f678303a-5eda-4e0c-b70b-e91699765112\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.898964 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.905230 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx6ch"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.908248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxtl\" (UniqueName: \"kubernetes.io/projected/a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d-kube-api-access-vjxtl\") pod \"control-plane-machine-set-operator-78cbb6b69f-l4krn\" (UID: \"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.912445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.914428 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.940243 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gns8k\" (UniqueName: \"kubernetes.io/projected/583f9d4c-ab28-4bbf-a346-116bdf6825cf-kube-api-access-gns8k\") pod \"ingress-operator-5b745b69d9-lsjp8\" (UID: \"583f9d4c-ab28-4bbf-a346-116bdf6825cf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.949944 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.950624 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r748z"] Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.954614 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2vc\" (UniqueName: \"kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc\") pod \"marketplace-operator-79b997595-tsgtb\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.959639 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.989715 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.990409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh2ct\" (UniqueName: \"kubernetes.io/projected/95865f35-2754-47c9-ac17-d8094427e2ea-kube-api-access-kh2ct\") pod \"machine-config-server-bwprw\" (UID: \"95865f35-2754-47c9-ac17-d8094427e2ea\") " pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:12 crc kubenswrapper[4812]: I0218 16:32:12.990603 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:12 crc kubenswrapper[4812]: E0218 16:32:12.991032 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.491012289 +0000 UTC m=+153.756623198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:12 crc kubenswrapper[4812]: W0218 16:32:12.993652 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3e79a06_f5d4_407d_b601_8385a4d9c32e.slice/crio-8401b385fc9e4911b428297f2cf979a3605b3d4aae8210393361a2c0e52ef939 WatchSource:0}: Error finding container 8401b385fc9e4911b428297f2cf979a3605b3d4aae8210393361a2c0e52ef939: Status 404 returned error can't find the container with id 8401b385fc9e4911b428297f2cf979a3605b3d4aae8210393361a2c0e52ef939 Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.005597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6c5\" (UniqueName: \"kubernetes.io/projected/5385719a-9a09-44ac-a1fb-0692c74f2bdf-kube-api-access-tm6c5\") pod \"machine-config-controller-84d6567774-tlq87\" (UID: \"5385719a-9a09-44ac-a1fb-0692c74f2bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.028368 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x64qp\" (UniqueName: \"kubernetes.io/projected/63db1dda-de3b-4fcc-aa34-812335a7700b-kube-api-access-x64qp\") pod \"ingress-canary-2kptm\" (UID: \"63db1dda-de3b-4fcc-aa34-812335a7700b\") " pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.064297 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jspt\" (UniqueName: 
\"kubernetes.io/projected/f019b253-047e-4a21-9e54-52cdc5835d33-kube-api-access-9jspt\") pod \"csi-hostpathplugin-9lm7j\" (UID: \"f019b253-047e-4a21-9e54-52cdc5835d33\") " pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.090597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wpb\" (UniqueName: \"kubernetes.io/projected/c22f80e8-3539-42d9-b007-ba361b470232-kube-api-access-w6wpb\") pod \"service-ca-9c57cc56f-d2crn\" (UID: \"c22f80e8-3539-42d9-b007-ba361b470232\") " pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.092177 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.092641 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.592621497 +0000 UTC m=+153.858232406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.099230 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmz27\" (UniqueName: \"kubernetes.io/projected/f5388af9-a696-4215-844e-bbafcd37b2ec-kube-api-access-kmz27\") pod \"multus-admission-controller-857f4d67dd-hzqj7\" (UID: \"f5388af9-a696-4215-844e-bbafcd37b2ec\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.185039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.193808 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.194152 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.694131233 +0000 UTC m=+153.959742142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.211003 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.227656 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.232262 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.241465 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.256495 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.265528 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bwprw" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.282994 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2kptm" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.297307 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-chx4h"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.298724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.299199 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.799181118 +0000 UTC m=+154.064792027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.312578 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.331920 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.341584 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xrhdr"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.343414 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84npp"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.409642 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.410170 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:13.910144432 +0000 UTC m=+154.175755341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: W0218 16:32:13.466603 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f6a188_11db_48fd_b4e0_58abfe97aa07.slice/crio-770c95097452d83f240a822b162e689ff1bfcf0d82262cc47d51ad881578e9ca WatchSource:0}: Error finding container 770c95097452d83f240a822b162e689ff1bfcf0d82262cc47d51ad881578e9ca: Status 404 returned error can't find the container with id 770c95097452d83f240a822b162e689ff1bfcf0d82262cc47d51ad881578e9ca Feb 18 16:32:13 crc kubenswrapper[4812]: W0218 16:32:13.470416 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc05670_962b_48fc_a2c3_ad79a606f32c.slice/crio-ac35b4f271fc0faa3ee75569c6ab3f4f2479874fa7e2049f2e90a6f9137547bb WatchSource:0}: Error finding container ac35b4f271fc0faa3ee75569c6ab3f4f2479874fa7e2049f2e90a6f9137547bb: Status 404 returned error can't find the container with id ac35b4f271fc0faa3ee75569c6ab3f4f2479874fa7e2049f2e90a6f9137547bb Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.511299 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.511715 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.011698099 +0000 UTC m=+154.277309008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.579415 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.586307 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwmt7" podStartSLOduration=132.586270102 podStartE2EDuration="2m12.586270102s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:13.57574341 +0000 UTC m=+153.841354319" watchObservedRunningTime="2026-02-18 16:32:13.586270102 +0000 UTC m=+153.851881011" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.603279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" event={"ID":"3e6ebf72-1d36-465d-9326-1923d28d5c28","Type":"ContainerStarted","Data":"2535b05fadf0aaa5459c9298c5dbf2e0b9def4ccbb2bc92c6840c404991f5b45"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.603334 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" event={"ID":"3e6ebf72-1d36-465d-9326-1923d28d5c28","Type":"ContainerStarted","Data":"6f52ded166d97163a37e4f9eb61018d2a418ee4dc5e699e484a0c1c5deecc222"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.604346 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.608904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" event={"ID":"1299fd10-7a60-463b-a99d-a0674c3741f1","Type":"ContainerStarted","Data":"40ca7d168cb64204b1fc9401fc91787a768c02fac8cf63691730adc6d8ddc86e"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.608942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" event={"ID":"1299fd10-7a60-463b-a99d-a0674c3741f1","Type":"ContainerStarted","Data":"7171ecd7c0f12132e3cfd2b6f6202f78a4a96d39f7c4067e27719ac632887d8f"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.610978 4812 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qsrdf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.611016 4812 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" podUID="3e6ebf72-1d36-465d-9326-1923d28d5c28" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.612451 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-blqkx" event={"ID":"2ee898c2-0a23-41cb-a680-709b6e8104ff","Type":"ContainerStarted","Data":"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.613083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.613307 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.113263117 +0000 UTC m=+154.378874016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.613468 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.613846 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.113831999 +0000 UTC m=+154.379442898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.640858 4812 generic.go:334] "Generic (PLEG): container finished" podID="9123084a-7c6e-463f-b006-ac02cc61c7b9" containerID="b5d8cdf885572594c485a24560a086fed759b962366c0b1871704c2842cc118b" exitCode=0 Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.640981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" event={"ID":"9123084a-7c6e-463f-b006-ac02cc61c7b9","Type":"ContainerDied","Data":"b5d8cdf885572594c485a24560a086fed759b962366c0b1871704c2842cc118b"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.669062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" event={"ID":"a3e79a06-f5d4-407d-b601-8385a4d9c32e","Type":"ContainerStarted","Data":"8401b385fc9e4911b428297f2cf979a3605b3d4aae8210393361a2c0e52ef939"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.715810 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.716159 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.216135793 +0000 UTC m=+154.481746702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.716281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.718261 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.21825272 +0000 UTC m=+154.483863619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.751359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" event={"ID":"d7f6a188-11db-48fd-b4e0-58abfe97aa07","Type":"ContainerStarted","Data":"770c95097452d83f240a822b162e689ff1bfcf0d82262cc47d51ad881578e9ca"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.790052 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" event={"ID":"70067b7f-0a79-444f-8041-8683d4ae95b2","Type":"ContainerStarted","Data":"61b5fbea5a972bfd3d2779c3c4187c3185deaf2ce259370e23a30e4cf28cc698"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.818022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.818564 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.318519449 +0000 UTC m=+154.584130358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.824530 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" event={"ID":"90f40305-18d9-499e-90db-aed66391bcf0","Type":"ContainerStarted","Data":"521a7de26208e0d4a514103b8b49a36459dc191bd206b1e6bb870c9632677eb6"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.832500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" event={"ID":"73cc9692-bbfe-48a7-865b-c2b4ac637527","Type":"ContainerStarted","Data":"5ddea55de796922772505a1bd960f40ef7c876bf75cd8510ece48824b0c8a70f"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.899054 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-flhsn"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.899579 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.899598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" event={"ID":"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2","Type":"ContainerStarted","Data":"b77f4b138cb3ecb46f6f35c1677ff50c7cc643b5342bbc797a3ea8bce79993d0"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.899625 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.900848 4812 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vt8tn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.901209 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.902594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-chx4h" event={"ID":"4428dc60-fd63-4b22-8589-08c8ac3dde08","Type":"ContainerStarted","Data":"ab64ff504cac1bb9933d82bdf6f4781fa2fbf48e719bf740a572e41f823d0264"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.910626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" 
event={"ID":"f678303a-5eda-4e0c-b70b-e91699765112","Type":"ContainerStarted","Data":"f29e32c99647074728ec0933d0c733a9376b373614c70645d840bd54ca0128fb"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.911671 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.916448 4812 generic.go:334] "Generic (PLEG): container finished" podID="d58bf47e-907b-42f2-89b0-919ee60b253e" containerID="52a910f5bae02b7122c6316537d2ed90964c5296ea155f533795ba70a8211b70" exitCode=0 Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.916560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" event={"ID":"d58bf47e-907b-42f2-89b0-919ee60b253e","Type":"ContainerDied","Data":"52a910f5bae02b7122c6316537d2ed90964c5296ea155f533795ba70a8211b70"} Feb 18 16:32:13 crc kubenswrapper[4812]: E0218 16:32:13.924878 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.424848721 +0000 UTC m=+154.690459810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.932748 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.945296 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.948426 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58"] Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.948576 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.963198 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" event={"ID":"677e33bb-1571-4051-bbe6-64dfc16f4520","Type":"ContainerStarted","Data":"0d1fc7dc411036a7d1f3d528947f2011f74de8d9cc6f332557830220c4299d75"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.963732 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.965347 4812 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pm7xx container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.965402 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.985126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" event={"ID":"9cc05670-962b-48fc-a2c3-ad79a606f32c","Type":"ContainerStarted","Data":"ac35b4f271fc0faa3ee75569c6ab3f4f2479874fa7e2049f2e90a6f9137547bb"} Feb 18 16:32:13 crc kubenswrapper[4812]: I0218 16:32:13.987632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" event={"ID":"6e8b5315-44dc-4ece-bd4c-2accb8b466c6","Type":"ContainerStarted","Data":"75f5a0943f723fbfd7ecf143e0f50dc42fe49f7e9b52b7fca28e45d59bc33c09"} Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.001632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xs668" event={"ID":"583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b","Type":"ContainerStarted","Data":"5f8d5ddd092a8cc8fe8915daa42e6e31e3fb7d9e887dfa6aec1b51083d34ecde"} Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.017352 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.019658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" event={"ID":"7f8a50e5-17af-449c-9e9f-ff051ba9c99f","Type":"ContainerStarted","Data":"68cafa847a3f388949235c5badef9081468dac3401dc4ea28adf0420d8db90e3"} Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.019702 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" event={"ID":"7f8a50e5-17af-449c-9e9f-ff051ba9c99f","Type":"ContainerStarted","Data":"031cfa07e203015205971a499dd709589de10e33edffc3a4a3a434173e8b060d"} Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.031460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" event={"ID":"f78ac4e9-599f-466f-ad93-2e945ea78dc9","Type":"ContainerStarted","Data":"7bc4d9660af3a031ce01d725b1ea1e1b1360eadd843794d057bf5d5acddd70b5"} Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.032288 4812 patch_prober.go:28] interesting pod/console-operator-58897d9998-zqqrs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.032332 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" podUID="e3cb5421-061d-41dc-a07d-1102a60ac54f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection 
refused" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.042793 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.082206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.084175 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.58413818 +0000 UTC m=+154.849749089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.086568 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.104250 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.107143 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9lm7j"] Feb 18 16:32:14 crc kubenswrapper[4812]: W0218 16:32:14.136347 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf0c0c98_dc78_4aa8_aefe_df4d889d2582.slice/crio-a5b9831d5be4fa2166e2aa477740456de42ba6f3f35cff90880fc3885103dc1e WatchSource:0}: Error finding container a5b9831d5be4fa2166e2aa477740456de42ba6f3f35cff90880fc3885103dc1e: Status 404 returned error can't find the container with id a5b9831d5be4fa2166e2aa477740456de42ba6f3f35cff90880fc3885103dc1e Feb 18 16:32:14 crc kubenswrapper[4812]: W0218 16:32:14.138518 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae617623_420a_436a_9ec3_3710fc1735fe.slice/crio-b1f6725fdba68598eb434cc995155ada924062042a589c9a563895dab7a06406 WatchSource:0}: Error finding container b1f6725fdba68598eb434cc995155ada924062042a589c9a563895dab7a06406: Status 404 returned error can't find the container with id b1f6725fdba68598eb434cc995155ada924062042a589c9a563895dab7a06406 Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.138551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.141215 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kjggh"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.148927 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-d2crn"] Feb 18 16:32:14 crc kubenswrapper[4812]: W0218 16:32:14.171290 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f5c004_b206_4144_9754_456af64c615e.slice/crio-f08899484abd06949519941aac5f01926cbc890374e529c72af688924e8a233d WatchSource:0}: Error finding container f08899484abd06949519941aac5f01926cbc890374e529c72af688924e8a233d: Status 404 returned error can't find the container with id f08899484abd06949519941aac5f01926cbc890374e529c72af688924e8a233d Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.184213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.184556 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.684543842 +0000 UTC m=+154.950154751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: W0218 16:32:14.186320 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5385719a_9a09_44ac_a1fb_0692c74f2bdf.slice/crio-9ced4d6a63033720585ca1899ee48a5b1796705935e0de31a261c92a4111a001 WatchSource:0}: Error finding container 9ced4d6a63033720585ca1899ee48a5b1796705935e0de31a261c92a4111a001: Status 404 returned error can't find the container with id 9ced4d6a63033720585ca1899ee48a5b1796705935e0de31a261c92a4111a001 Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.209884 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.219259 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.219314 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.250971 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2kptm"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.285140 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.285623 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.785599008 +0000 UTC m=+155.051209917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.366165 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.387286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.387898 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.887862931 +0000 UTC m=+155.153473840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.401924 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hzqj7"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.406152 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8"] Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.488786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.489021 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.988979189 +0000 UTC m=+155.254590098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.489766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.490283 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:14.990274637 +0000 UTC m=+155.255885546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.591792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.592026 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.091995838 +0000 UTC m=+155.357606757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.592680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.593287 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.093276296 +0000 UTC m=+155.358887205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.694917 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" podStartSLOduration=133.694887935 podStartE2EDuration="2m13.694887935s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:14.662634134 +0000 UTC m=+154.928245043" watchObservedRunningTime="2026-02-18 16:32:14.694887935 +0000 UTC m=+154.960498854" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.696524 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.696956 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.19693988 +0000 UTC m=+155.462550789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.797842 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.798341 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.298322854 +0000 UTC m=+155.563933773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.820573 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vgpcj" podStartSLOduration=133.820541943 podStartE2EDuration="2m13.820541943s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:14.817524627 +0000 UTC m=+155.083135536" watchObservedRunningTime="2026-02-18 16:32:14.820541943 +0000 UTC m=+155.086152852" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.902650 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:14 crc kubenswrapper[4812]: E0218 16:32:14.903356 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.403327467 +0000 UTC m=+155.668938376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.939885 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-blqkx" podStartSLOduration=133.939835741 podStartE2EDuration="2m13.939835741s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:14.903984311 +0000 UTC m=+155.169595240" watchObservedRunningTime="2026-02-18 16:32:14.939835741 +0000 UTC m=+155.205446660" Feb 18 16:32:14 crc kubenswrapper[4812]: I0218 16:32:14.940709 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xndx9" podStartSLOduration=133.94070153 podStartE2EDuration="2m13.94070153s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:14.933718126 +0000 UTC m=+155.199329045" watchObservedRunningTime="2026-02-18 16:32:14.94070153 +0000 UTC m=+155.206312449" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.005059 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.006041 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.506024169 +0000 UTC m=+155.771635078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.020086 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" podStartSLOduration=135.020056588 podStartE2EDuration="2m15.020056588s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.018810131 +0000 UTC m=+155.284421040" watchObservedRunningTime="2026-02-18 16:32:15.020056588 +0000 UTC m=+155.285667497" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.064550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" event={"ID":"583f9d4c-ab28-4bbf-a346-116bdf6825cf","Type":"ContainerStarted","Data":"3c2bb7118c6fd6d25bf7f8cc71671a756dfc4781c5f9f01e3b76f08b549af2ab"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.072456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" event={"ID":"4c5f002a-45f3-4079-ac97-d583dfb50984","Type":"ContainerStarted","Data":"d04d597b46d8ad7a14f261bec12f306c8f64a474ade4f891ac31785a367f3934"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.076710 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bwprw" event={"ID":"95865f35-2754-47c9-ac17-d8094427e2ea","Type":"ContainerStarted","Data":"ff9ee7529216ed41498be0601b8579b4ac6047312e13de10cde7cb1d5974095e"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.077175 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bwprw" event={"ID":"95865f35-2754-47c9-ac17-d8094427e2ea","Type":"ContainerStarted","Data":"be7bc619118db045455d3b8d67ac287ae373247473247a9dce91b532152cd7a5"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.106012 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.107168 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.607117886 +0000 UTC m=+155.872728945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.123443 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" event={"ID":"f5388af9-a696-4215-844e-bbafcd37b2ec","Type":"ContainerStarted","Data":"f9c30c18430d3404b6655c9ba66976d6202cb0ba2ad9a188ae3b18b6048d79e7"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.145714 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xs668" podStartSLOduration=134.145683246 podStartE2EDuration="2m14.145683246s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.058164988 +0000 UTC m=+155.323775917" watchObservedRunningTime="2026-02-18 16:32:15.145683246 +0000 UTC m=+155.411294155" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.165538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" event={"ID":"b00e87af-1e21-4c4b-ae20-9da5de7e8176","Type":"ContainerStarted","Data":"51c1df9c19edb83ea83b98af29550a1af4e0f286db93e3740b5c891f12935ae1"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.171427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" event={"ID":"f019b253-047e-4a21-9e54-52cdc5835d33","Type":"ContainerStarted","Data":"ea09a43839729706aff3b22043703af879103a539a2cd98d99bd7055eb5fd58d"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.208046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.209123 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.709089413 +0000 UTC m=+155.974700312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.215534 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:15 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:15 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:15 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.215622 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.309935 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.310447 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.810420185 +0000 UTC m=+156.076031094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.380486 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" event={"ID":"9cc05670-962b-48fc-a2c3-ad79a606f32c","Type":"ContainerStarted","Data":"eb9389ed47f511aba8015a71fb633a4e17472cfb2d93ceb62634ab5ce11dc528"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.403703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" event={"ID":"95d36ab6-23d1-4fa5-ba03-ece8e724b74a","Type":"ContainerStarted","Data":"0a03e28bfaf7d270d2052b43e38de78bef38913311b06daf13be301b544aab80"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.404266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" event={"ID":"95d36ab6-23d1-4fa5-ba03-ece8e724b74a","Type":"ContainerStarted","Data":"5bc34174162c3433c4d843cc0e9b143b3d5ad218674ddb7665b2d9a5f32a1358"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.411782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.412273 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:15.912253549 +0000 UTC m=+156.177864458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.486653 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" podStartSLOduration=134.486630047 podStartE2EDuration="2m14.486630047s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.485244877 +0000 UTC m=+155.750855786" watchObservedRunningTime="2026-02-18 16:32:15.486630047 +0000 UTC m=+155.752240956" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.491869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" event={"ID":"70067b7f-0a79-444f-8041-8683d4ae95b2","Type":"ContainerStarted","Data":"f8cd2c28a54841a3ee06b6aeb7a76eba69d46ca75f28164a9b91346772d7701f"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.512460 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.512794 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.012779973 +0000 UTC m=+156.278390882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.523960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" event={"ID":"9123084a-7c6e-463f-b006-ac02cc61c7b9","Type":"ContainerStarted","Data":"f2382e02a95301d4a8d262a5ce0a6808cf0d5c8784dcd1cd1c3914c06d015140"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.524742 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.527700 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vk2pm" podStartSLOduration=135.527688462 podStartE2EDuration="2m15.527688462s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.525256418 +0000 UTC m=+155.790867337" watchObservedRunningTime="2026-02-18 16:32:15.527688462 +0000 UTC m=+155.793299361" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.542159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kjggh" event={"ID":"7db280f9-3aeb-4461-b688-25611e6b3694","Type":"ContainerStarted","Data":"401f047a7fec33838161aefb8610571cf7d049b79fd76da1fd180408b4a9ae0b"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.616742 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.617145 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.117131042 +0000 UTC m=+156.382741941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.619486 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" event={"ID":"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4","Type":"ContainerStarted","Data":"bb6c1a30b7cd75153c93c3eba8339b1da973bb345fd8d67215cd6e5c3d407e01"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.619565 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" event={"ID":"2d3ca68f-4b91-4a81-b36c-d25fb10d00d4","Type":"ContainerStarted","Data":"00e0e99c14c1cd0e21f534105737b527281d4e95a2f9408b5f9a796cba1d6a87"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.648527 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" event={"ID":"c22f80e8-3539-42d9-b007-ba361b470232","Type":"ContainerStarted","Data":"795113eaaf41b45b78943b56234b93065ef7e4219d558077b5ec246212dc17aa"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.648605 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" event={"ID":"c22f80e8-3539-42d9-b007-ba361b470232","Type":"ContainerStarted","Data":"ce36f296a346def56159b7f0d045f901a3a3432c628a34b709009880fe873888"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.695755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" event={"ID":"6e8b5315-44dc-4ece-bd4c-2accb8b466c6","Type":"ContainerStarted","Data":"e17aa80d42dbf4c13b0620432adc2b110f8efe142d1b462b7115130574bccb67"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.705533 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" podStartSLOduration=134.705511858 podStartE2EDuration="2m14.705511858s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.704899765 +0000 UTC m=+155.970510674" watchObservedRunningTime="2026-02-18 16:32:15.705511858 +0000 UTC m=+155.971122767" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.720136 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.722005 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 16:32:16.221983401 +0000 UTC m=+156.487594310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.734114 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" event={"ID":"cf0c0c98-dc78-4aa8-aefe-df4d889d2582","Type":"ContainerStarted","Data":"4e29ba3d2ef3bec201ba1206c05880e1454d2e95f3db3275c187b0e2bc528e52"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.734169 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" event={"ID":"cf0c0c98-dc78-4aa8-aefe-df4d889d2582","Type":"ContainerStarted","Data":"a5b9831d5be4fa2166e2aa477740456de42ba6f3f35cff90880fc3885103dc1e"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.768752 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" event={"ID":"f0cdcf15-ce40-4e82-b45b-8265e256c319","Type":"ContainerStarted","Data":"82f21bdd0ff1c13fdbaf873c75f905d208a24261dcd6687e8f936fd327d5f468"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.769939 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.788770 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" podStartSLOduration=134.788750942 podStartE2EDuration="2m14.788750942s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.785257745 +0000 UTC m=+156.050868654" watchObservedRunningTime="2026-02-18 16:32:15.788750942 +0000 UTC m=+156.054361851" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.790422 4812 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jtzmh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.790502 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" podUID="f0cdcf15-ce40-4e82-b45b-8265e256c319" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.796319 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" event={"ID":"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d","Type":"ContainerStarted","Data":"0a2874fe1590501e1f9f37b66623eb889dbe35c85365de93ac1b95850924befd"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 
16:32:15.809397 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-d2crn" podStartSLOduration=134.809372656 podStartE2EDuration="2m14.809372656s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.808381514 +0000 UTC m=+156.073992423" watchObservedRunningTime="2026-02-18 16:32:15.809372656 +0000 UTC m=+156.074983565" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.811146 4812 generic.go:334] "Generic (PLEG): container finished" podID="d7f6a188-11db-48fd-b4e0-58abfe97aa07" containerID="c9d417bb8290d24c11873a90ddb82208c4246441e3fd85b4ebf64984d6bc2383" exitCode=0 Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.811291 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" event={"ID":"d7f6a188-11db-48fd-b4e0-58abfe97aa07","Type":"ContainerDied","Data":"c9d417bb8290d24c11873a90ddb82208c4246441e3fd85b4ebf64984d6bc2383"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.825196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.825670 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.325648145 +0000 UTC m=+156.591259044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.848413 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" event={"ID":"a3e79a06-f5d4-407d-b601-8385a4d9c32e","Type":"ContainerStarted","Data":"9ce5222fd9c63b451561b2ba1d6ff1e85a20eeabb486cf3e35eeae04f26958a2"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.872564 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lvnh" podStartSLOduration=135.872539738 podStartE2EDuration="2m15.872539738s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.869787737 +0000 UTC m=+156.135398666" watchObservedRunningTime="2026-02-18 16:32:15.872539738 +0000 UTC m=+156.138150647" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.931055 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2j89k" podStartSLOduration=134.931028776 podStartE2EDuration="2m14.931028776s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.930554836 +0000 UTC m=+156.196165745" watchObservedRunningTime="2026-02-18 16:32:15.931028776 +0000 UTC m=+156.196639685" Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.931297 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:15 crc kubenswrapper[4812]: E0218 16:32:15.936763 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.436730472 +0000 UTC m=+156.702341381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.965512 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" event={"ID":"7f8a50e5-17af-449c-9e9f-ff051ba9c99f","Type":"ContainerStarted","Data":"7d16ab604fab15ecd69be8c62c6411a43126e7e73d8a07b206dbe234f97426a4"} Feb 18 16:32:15 crc kubenswrapper[4812]: I0218 16:32:15.986345 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bwprw" podStartSLOduration=6.986318274 podStartE2EDuration="6.986318274s" podCreationTimestamp="2026-02-18 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:15.978970042 +0000 UTC m=+156.244580961" watchObservedRunningTime="2026-02-18 16:32:15.986318274 +0000 UTC m=+156.251929193" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:15.998870 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-chx4h" event={"ID":"4428dc60-fd63-4b22-8589-08c8ac3dde08","Type":"ContainerStarted","Data":"5e4c60fd88d1166611d5c17d173f6f942dc8c0556e465228fcdeba0ad83a5516"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:15.999750 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.020084 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" event={"ID":"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5","Type":"ContainerStarted","Data":"3c7b7b7b672f98b68ac9f11a9bba624c5cf1d3f90fcd816bcd2e5aa06b77b22f"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.020159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" event={"ID":"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5","Type":"ContainerStarted","Data":"8cf4d91dee56f64ad5c5f223f44a2333af4477bcdc5cb0b67970e38d44bdcbe3"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.021232 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.029530 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.029610 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.040836 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.043057 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.543035894 +0000 UTC m=+156.808646823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.045460 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t65dk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.045552 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.065191 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2kptm" event={"ID":"63db1dda-de3b-4fcc-aa34-812335a7700b","Type":"ContainerStarted","Data":"560c1bd4804965fb1bf023eba908ac8e93d6a7ea41a9e520f16d8c9fce8d4d04"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.087172 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-r748z" podStartSLOduration=135.087134345 podStartE2EDuration="2m15.087134345s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.029949895 +0000 UTC m=+156.295560794" watchObservedRunningTime="2026-02-18 16:32:16.087134345 +0000 UTC m=+156.352745254" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.100840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" event={"ID":"5385719a-9a09-44ac-a1fb-0692c74f2bdf","Type":"ContainerStarted","Data":"9ced4d6a63033720585ca1899ee48a5b1796705935e0de31a261c92a4111a001"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.131693 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" 
event={"ID":"22840f71-c3ca-4199-9bb5-34cc6bab5140","Type":"ContainerStarted","Data":"b758f8d1ba446aadfd82c45fe046c8355edb59203658462b705c77a12b261fdc"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.144264 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.150697 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.650631564 +0000 UTC m=+156.916242473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.151996 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hx6ch" podStartSLOduration=135.151969923 podStartE2EDuration="2m15.151969923s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.087325109 +0000 UTC m=+156.352936018" watchObservedRunningTime="2026-02-18 16:32:16.151969923 +0000 UTC m=+156.417580832" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.153452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" event={"ID":"ae617623-420a-436a-9ec3-3710fc1735fe","Type":"ContainerStarted","Data":"b1f6725fdba68598eb434cc995155ada924062042a589c9a563895dab7a06406"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.153612 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.161810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.181407 4812 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-trkrb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.181487 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" 
podUID="ae617623-420a-436a-9ec3-3710fc1735fe" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.183454 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.683436227 +0000 UTC m=+156.949047136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.192905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" event={"ID":"f678303a-5eda-4e0c-b70b-e91699765112","Type":"ContainerStarted","Data":"bc832536e2f769761f299d7fa966f3bf0b3057b73e16c8ff296f79e0e7a4748f"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.223303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" event={"ID":"29f5c004-b206-4144-9754-456af64c615e","Type":"ContainerStarted","Data":"f08899484abd06949519941aac5f01926cbc890374e529c72af688924e8a233d"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.226291 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" podStartSLOduration=135.22626147 podStartE2EDuration="2m15.22626147s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.224209215 +0000 UTC m=+156.489820124" watchObservedRunningTime="2026-02-18 16:32:16.22626147 +0000 UTC m=+156.491872369" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.227672 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:16 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:16 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:16 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.227744 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.276716 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" podStartSLOduration=135.276692481 podStartE2EDuration="2m15.276692481s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.274685677 +0000 UTC m=+156.540296596" watchObservedRunningTime="2026-02-18 16:32:16.276692481 +0000 UTC m=+156.542303390" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.293046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.293598 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.793575563 +0000 UTC m=+157.059186462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.293725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.295897 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.795872474 +0000 UTC m=+157.061483513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.338646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" event={"ID":"f78ac4e9-599f-466f-ad93-2e945ea78dc9","Type":"ContainerStarted","Data":"1faf807e584a4bcf5cec2a291f5170a0f2c6fc9a329390ae16cdd45c403e4d1d"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.339009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" event={"ID":"f78ac4e9-599f-466f-ad93-2e945ea78dc9","Type":"ContainerStarted","Data":"798aed91174f8e26ce146ab6e8cf35b8b366d60f9db8b622d7afee19d582b405"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.354319 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" podStartSLOduration=135.349757441 podStartE2EDuration="2m15.349757441s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.335682971 +0000 UTC m=+156.601293880" watchObservedRunningTime="2026-02-18 16:32:16.349757441 +0000 UTC m=+156.615368350" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.389494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" event={"ID":"ce646036-070b-4e97-bce1-afff187c3c83","Type":"ContainerStarted","Data":"f50a74e343cd389cca495ca1e117fe4480b7b6301ee49e78b7bf71d70291630a"} Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.407564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.408451 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.408660 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:16.908621327 +0000 UTC m=+157.174232416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.409060 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.410594 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" podStartSLOduration=135.41054288 podStartE2EDuration="2m15.41054288s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.409314423 +0000 UTC m=+156.674925342" watchObservedRunningTime="2026-02-18 16:32:16.41054288 +0000 UTC m=+156.676153789" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.421279 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qsrdf" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.454204 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" podStartSLOduration=135.454180601 podStartE2EDuration="2m15.454180601s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.453620129 +0000 UTC m=+156.719231038" watchObservedRunningTime="2026-02-18 16:32:16.454180601 +0000 UTC m=+156.719791510" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.468593 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zqqrs" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.488369 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-chx4h" podStartSLOduration=135.488350214 podStartE2EDuration="2m15.488350214s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.48726931 +0000 UTC m=+156.752880229" watchObservedRunningTime="2026-02-18 16:32:16.488350214 +0000 UTC m=+156.753961113" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.509972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.516918 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.016896373 +0000 UTC m=+157.282507282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.548283 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2kptm" podStartSLOduration=7.548257104 podStartE2EDuration="7.548257104s" podCreationTimestamp="2026-02-18 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.545779559 +0000 UTC m=+156.811390458" watchObservedRunningTime="2026-02-18 16:32:16.548257104 +0000 UTC m=+156.813868013" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.612283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.612621 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.11257223 +0000 UTC m=+157.378183139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.613045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.613567 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.113549792 +0000 UTC m=+157.379160701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.622806 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" podStartSLOduration=135.622781475 podStartE2EDuration="2m15.622781475s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.576017805 +0000 UTC m=+156.841628714" watchObservedRunningTime="2026-02-18 16:32:16.622781475 +0000 UTC m=+156.888392384" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.715874 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.717675 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.217621665 +0000 UTC m=+157.483232644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.794902 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lwcr8" podStartSLOduration=135.794867547 podStartE2EDuration="2m15.794867547s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:16.730372726 +0000 UTC m=+156.995983655" watchObservedRunningTime="2026-02-18 16:32:16.794867547 +0000 UTC m=+157.060478456" Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.817837 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.818302 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.318286102 +0000 UTC m=+157.583897011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:16 crc kubenswrapper[4812]: I0218 16:32:16.924116 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:16 crc kubenswrapper[4812]: E0218 16:32:16.924488 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.424448171 +0000 UTC m=+157.690059190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.025843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.027073 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.527049072 +0000 UTC m=+157.792659981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.129911 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.130418 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.630384698 +0000 UTC m=+157.895995607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.218713 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:17 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:17 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:17 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.218822 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.231903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.232371 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.732355254 +0000 UTC m=+157.997966163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.333697 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.334036 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.834006164 +0000 UTC m=+158.099617073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.334194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.334593 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.834583166 +0000 UTC m=+158.100194075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.399528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" event={"ID":"d58bf47e-907b-42f2-89b0-919ee60b253e","Type":"ContainerStarted","Data":"375437127a720bee6b448c7b1202dcc3a3a8209134317283065b8f91c8ea67d4"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.401507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" event={"ID":"9cc05670-962b-48fc-a2c3-ad79a606f32c","Type":"ContainerStarted","Data":"385b17cde66df9ef5cdf5c53a623bc9b9e9db48808181c24444c83c686fd4a5d"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.403248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" event={"ID":"ce646036-070b-4e97-bce1-afff187c3c83","Type":"ContainerStarted","Data":"ec3d4718c91aee07b23a565bb1b54938619c43b78b2bd36ad7739f697881b2c6"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.406155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kjggh" event={"ID":"7db280f9-3aeb-4461-b688-25611e6b3694","Type":"ContainerStarted","Data":"5f227a3519f7266b3ea956a8781cb8470ed956140de9f94d11ef466fc714b745"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.406234 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kjggh" event={"ID":"7db280f9-3aeb-4461-b688-25611e6b3694","Type":"ContainerStarted","Data":"2336058d5a1f88da66f4f18acfe503b90d7dd680a5787da22b90cfc90a95aa27"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.406262 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:17 crc 
kubenswrapper[4812]: I0218 16:32:17.408255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" event={"ID":"cf0c0c98-dc78-4aa8-aefe-df4d889d2582","Type":"ContainerStarted","Data":"5f74e741ff4f14375e77370bf0858eb65dc1c00cd449ddd351cd6e342af4bac5"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.410349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" event={"ID":"ae617623-420a-436a-9ec3-3710fc1735fe","Type":"ContainerStarted","Data":"34b0509282eddeec23e565ad5244e28c4c57e1c323bcbadf4c0ddc47eb8a8fff"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.413535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" event={"ID":"5385719a-9a09-44ac-a1fb-0692c74f2bdf","Type":"ContainerStarted","Data":"cdeccfdd6f111d766ee1a98942a51a35163017938a5afc04e913a26b60327a9b"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.413594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" event={"ID":"5385719a-9a09-44ac-a1fb-0692c74f2bdf","Type":"ContainerStarted","Data":"41cc710344b71c23943645dd24554645a20b738f15da88adfd4dd988d650c6d3"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.419366 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" event={"ID":"f678303a-5eda-4e0c-b70b-e91699765112","Type":"ContainerStarted","Data":"80702f26d6482ff3bf95ec4e6584bfc52a0ca24ce402377f842cfbb9cdaabc33"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.420080 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-trkrb" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.421492 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l4krn" event={"ID":"a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d","Type":"ContainerStarted","Data":"f2ba8b894ffdefe297529e185a5b167523f50e15f7dadba5f9afa9f3354df2e7"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.424559 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" podStartSLOduration=136.424541928 podStartE2EDuration="2m16.424541928s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.422445392 +0000 UTC m=+157.688056301" watchObservedRunningTime="2026-02-18 16:32:17.424541928 +0000 UTC m=+157.690152837" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.425252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" event={"ID":"583f9d4c-ab28-4bbf-a346-116bdf6825cf","Type":"ContainerStarted","Data":"1bc58dcc2c88cc794cc4d3fdbd4cb9d0c26254b47a563e77c4570282c1620654"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.425287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" event={"ID":"583f9d4c-ab28-4bbf-a346-116bdf6825cf","Type":"ContainerStarted","Data":"031ed04e2cc216a6ec65618f61a7fccbf7bb8cf0e4e8a0de2ab8e97f8fa057a1"} Feb 18 
16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.427340 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" event={"ID":"22840f71-c3ca-4199-9bb5-34cc6bab5140","Type":"ContainerStarted","Data":"1ac6f7e08f5f1406070dd76b51509b6601067a7ffd996484156ca2616fd07369"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.430938 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9g4x" event={"ID":"29f5c004-b206-4144-9754-456af64c615e","Type":"ContainerStarted","Data":"d4476ef068cc3086c506f2f54d8fb4a62855a82fc2a01a92662b1b759529c24e"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.434769 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.435272 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:17.935252514 +0000 UTC m=+158.200863423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.437453 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" event={"ID":"f019b253-047e-4a21-9e54-52cdc5835d33","Type":"ContainerStarted","Data":"954ccaa2edcebe8ba06a4a0c92653bd6c99ba536f797caa6822d94df691b89eb"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.444377 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tlq87" podStartSLOduration=136.444356855 podStartE2EDuration="2m16.444356855s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.443622169 +0000 UTC m=+157.709233078" watchObservedRunningTime="2026-02-18 16:32:17.444356855 +0000 UTC m=+157.709967764" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.445941 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2kptm" event={"ID":"63db1dda-de3b-4fcc-aa34-812335a7700b","Type":"ContainerStarted","Data":"9033a6a39a19284e7679a575b9f2cb86b90f8281f64f4522ecb065a547a81d49"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.456736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" event={"ID":"b00e87af-1e21-4c4b-ae20-9da5de7e8176","Type":"ContainerStarted","Data":"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7"} 
Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.458220 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.465572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" event={"ID":"4c5f002a-45f3-4079-ac97-d583dfb50984","Type":"ContainerStarted","Data":"069adda67483ab8d7a857a27b139c9fc6303b4477e25f6a6301d9b20b82fafde"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.474641 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tsgtb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.474696 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.478182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" event={"ID":"70067b7f-0a79-444f-8041-8683d4ae95b2","Type":"ContainerStarted","Data":"a5c20be3ccb9c8bf2d4957559857c3edc9667bccd06c19ddd90243231aa8b306"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.489042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" event={"ID":"95d36ab6-23d1-4fa5-ba03-ece8e724b74a","Type":"ContainerStarted","Data":"21fb733e8c9f76fecf232b650b76a51e7814254c06d94b4018d33c2125b0f296"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.489924 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.493575 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w7f9v" podStartSLOduration=136.493547848 podStartE2EDuration="2m16.493547848s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.47771971 +0000 UTC m=+157.743330619" watchObservedRunningTime="2026-02-18 16:32:17.493547848 +0000 UTC m=+157.759158757" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.507878 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" event={"ID":"f5388af9-a696-4215-844e-bbafcd37b2ec","Type":"ContainerStarted","Data":"55de69324d9e9329bbfa32c0b3e095104ff80eb3c2f4bcd9986aa1e23d853941"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.507953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" event={"ID":"f5388af9-a696-4215-844e-bbafcd37b2ec","Type":"ContainerStarted","Data":"a21d5f1c4733a660666b0cd4b57cad8ac7d1290bdcaa6ca8a8956cc8128450f0"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.519653 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" event={"ID":"f0cdcf15-ce40-4e82-b45b-8265e256c319","Type":"ContainerStarted","Data":"c5a2621ff172c42f05eefb3b4b3082a0235f77f28599ec7b910a54a974d602af"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.534764 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" event={"ID":"d7f6a188-11db-48fd-b4e0-58abfe97aa07","Type":"ContainerStarted","Data":"d56847fa2189c3bb21e2540cf804c7e13a50399355138a6b8bb963363a2b889c"} Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.535804 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.535875 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.536305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.544447 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.044426849 +0000 UTC m=+158.310037758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.549975 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtzmh" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.553066 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84npp" podStartSLOduration=136.553037269 podStartE2EDuration="2m16.553037269s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.507419154 +0000 UTC m=+157.773030073" watchObservedRunningTime="2026-02-18 16:32:17.553037269 +0000 UTC m=+157.818648178" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.582133 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.601594 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kjggh" podStartSLOduration=8.601570088 podStartE2EDuration="8.601570088s" podCreationTimestamp="2026-02-18 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.544377248 +0000 UTC m=+157.809988167" watchObservedRunningTime="2026-02-18 16:32:17.601570088 +0000 UTC m=+157.867180997" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.641593 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.643169 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.143142784 +0000 UTC m=+158.408753683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.662242 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xbxtf" podStartSLOduration=137.662211074 podStartE2EDuration="2m17.662211074s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.660629839 +0000 UTC m=+157.926240748" watchObservedRunningTime="2026-02-18 16:32:17.662211074 +0000 UTC m=+157.927821973" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.746040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.746509 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.246493601 +0000 UTC m=+158.512104510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.767196 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" podStartSLOduration=136.767164996 podStartE2EDuration="2m16.767164996s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.717494602 +0000 UTC m=+157.983105511" watchObservedRunningTime="2026-02-18 16:32:17.767164996 +0000 UTC m=+158.032775905" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.770076 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.770479 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.774151 4812 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xrhdr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.774217 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" podUID="d7f6a188-11db-48fd-b4e0-58abfe97aa07" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.797435 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ztgl" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.847272 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.847909 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.347886254 +0000 UTC m=+158.613497163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.860050 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-flhsn" podStartSLOduration=136.860023001 podStartE2EDuration="2m16.860023001s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.858903606 +0000 UTC m=+158.124514515" watchObservedRunningTime="2026-02-18 16:32:17.860023001 +0000 UTC m=+158.125633910" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.861404 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hzqj7" podStartSLOduration=136.861395811 podStartE2EDuration="2m16.861395811s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.808755302 +0000 UTC m=+158.074366211" watchObservedRunningTime="2026-02-18 16:32:17.861395811 +0000 UTC m=+158.127006720" Feb 18 16:32:17 crc kubenswrapper[4812]: I0218 16:32:17.952480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:17 crc kubenswrapper[4812]: E0218 16:32:17.953156 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.453135762 +0000 UTC m=+158.718746671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.012505 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" podStartSLOduration=138.012468359 podStartE2EDuration="2m18.012468359s" podCreationTimestamp="2026-02-18 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:17.942221522 +0000 UTC m=+158.207832431" watchObservedRunningTime="2026-02-18 16:32:18.012468359 +0000 UTC m=+158.278079278" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.054238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.054918 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.554895964 +0000 UTC m=+158.820506873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.061525 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t2dg5" podStartSLOduration=137.061500669 podStartE2EDuration="2m17.061500669s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:18.013487382 +0000 UTC m=+158.279098291" watchObservedRunningTime="2026-02-18 16:32:18.061500669 +0000 UTC m=+158.327111578" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.062865 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" podStartSLOduration=137.062858339 podStartE2EDuration="2m17.062858339s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:18.059044615 +0000 UTC m=+158.324655544" watchObservedRunningTime="2026-02-18 16:32:18.062858339 +0000 UTC m=+158.328469248" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.108154 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lpl58" podStartSLOduration=137.108125427 podStartE2EDuration="2m17.108125427s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:18.099402965 +0000 UTC m=+158.365013874" watchObservedRunningTime="2026-02-18 16:32:18.108125427 +0000 UTC m=+158.373736346" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.156308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.156959 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.656920571 +0000 UTC m=+158.922531480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.213041 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:18 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:18 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:18 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.213126 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.257333 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.257618 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.757571078 +0000 UTC m=+159.023181987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.257793 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.258282 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.758272004 +0000 UTC m=+159.023882913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.319997 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lsjp8" podStartSLOduration=137.319964743 podStartE2EDuration="2m17.319964743s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:18.319464022 +0000 UTC m=+158.585074951" watchObservedRunningTime="2026-02-18 16:32:18.319964743 +0000 UTC m=+158.585575642" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.361083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.361373 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.861314594 +0000 UTC m=+159.126925493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.361454 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.362230 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.862218644 +0000 UTC m=+159.127829553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.462719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.462957 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.962917232 +0000 UTC m=+159.228528141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.463055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.463428 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:18.963420143 +0000 UTC m=+159.229031052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.538226 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" event={"ID":"f019b253-047e-4a21-9e54-52cdc5835d33","Type":"ContainerStarted","Data":"21970ff805cb25141ad3afee95569b084c20c78f150683ccff3d09de2255f8fa"} Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.546505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" event={"ID":"d7f6a188-11db-48fd-b4e0-58abfe97aa07","Type":"ContainerStarted","Data":"17ff6939af382ff828d7323dab2662723cf35e2ba2cea1d0d3ce93d18671419f"} Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.547131 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.547194 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.547989 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tsgtb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.548008 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.565696 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.566202 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.066179816 +0000 UTC m=+159.331790725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.599730 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.600805 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.639876 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.667225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.668973 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.16894531 +0000 UTC m=+159.434556219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.683918 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.782981 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.783730 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmlq\" (UniqueName: \"kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.783774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.783796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.783896 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.283879942 +0000 UTC m=+159.549490851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.794540 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.795838 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.806677 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.882065 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885172 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885199 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885248 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dk4b\" (UniqueName: \"kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.885348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmlq\" (UniqueName: \"kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.886064 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.886310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.886701 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.386687317 +0000 UTC m=+159.652298226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.897204 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.898375 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.905033 4812 csr.go:261] certificate signing request csr-p64bz is approved, waiting to be issued Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.909669 4812 csr.go:257] certificate signing request csr-p64bz is issued Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.931583 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.949684 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmlq\" (UniqueName: \"kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq\") pod \"certified-operators-86cmv\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.994778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.994988 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.494953182 +0000 UTC m=+159.760564091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995165 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjllp\" (UniqueName: \"kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995243 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995268 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dk4b\" (UniqueName: \"kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995365 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.995990 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: I0218 16:32:18.996692 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:18 crc kubenswrapper[4812]: E0218 16:32:18.996893 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.496883845 +0000 UTC m=+159.762494754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.056246 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.058938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dk4b\" (UniqueName: \"kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b\") pod \"community-operators-4k6w9\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.058957 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.080548 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.096412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.096776 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjllp\" (UniqueName: \"kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.096820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.096838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.097236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.097320 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.597299617 +0000 UTC m=+159.862910526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.097550 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.102641 4812 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.133859 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.169385 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjllp\" (UniqueName: \"kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp\") pod \"certified-operators-z7jvz\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.199617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.199755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.199786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.199827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.200203 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.700187164 +0000 UTC m=+159.965798073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.217374 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:19 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:19 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:19 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.217459 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.223472 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.235188 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.300942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.301162 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.801075105 +0000 UTC m=+160.066686014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301223 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301304 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301712 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.301729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.301920 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.801896183 +0000 UTC m=+160.067507092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.337359 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z\") pod \"community-operators-l9xq4\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.398613 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.402567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.402985 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:19.90296183 +0000 UTC m=+160.168572739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.511046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.511829 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 16:32:20.011813148 +0000 UTC m=+160.277424047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.612595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.613142 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 16:32:20.11312078 +0000 UTC m=+160.378731689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.621449 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" event={"ID":"f019b253-047e-4a21-9e54-52cdc5835d33","Type":"ContainerStarted","Data":"62e6f40d56f60f483b9819163df9aca9cd4aef3d38149c265c76847eaf09297a"} Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.621494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" event={"ID":"f019b253-047e-4a21-9e54-52cdc5835d33","Type":"ContainerStarted","Data":"1b1f19a2ab3e93da82c932da4fa70240944657a0ee6b7bd1f886bd3c67a94976"} Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.637574 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.695239 4812 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T16:32:19.102681316Z","Handler":null,"Name":""} Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.716853 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:19 crc kubenswrapper[4812]: E0218 16:32:19.717310 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 16:32:20.217293205 +0000 UTC m=+160.482904104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sqzbm" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.754595 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9lm7j" podStartSLOduration=10.754567256 podStartE2EDuration="10.754567256s" podCreationTimestamp="2026-02-18 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:19.692757244 +0000 UTC m=+159.958368153" watchObservedRunningTime="2026-02-18 16:32:19.754567256 +0000 UTC m=+160.020178165" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.813395 4812 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.813444 4812 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.817914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.845060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.873012 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.949971 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 16:27:18 +0000 UTC, rotation deadline is 2026-12-28 15:04:17.603337087 +0000 UTC Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.950017 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7510h31m57.653324016s for next certificate rotation Feb 18 16:32:19 crc kubenswrapper[4812]: I0218 16:32:19.951667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.003621 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.003694 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.018016 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.090733 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:32:20 crc kubenswrapper[4812]: W0218 16:32:20.101753 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd50996_0863_4c12_87b4_3e771a829d07.slice/crio-365352f96f2d1b2f38ad7dff007503ea3a4ae433f56133e0c1e52ddb812a7734 WatchSource:0}: Error finding container 365352f96f2d1b2f38ad7dff007503ea3a4ae433f56133e0c1e52ddb812a7734: Status 404 returned error can't find the container with id 365352f96f2d1b2f38ad7dff007503ea3a4ae433f56133e0c1e52ddb812a7734 Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.221046 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:20 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:20 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:20 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.221130 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.306160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sqzbm\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.309649 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.451420 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.452750 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.454769 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.466533 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.502382 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.544071 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.568718 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2kd\" (UniqueName: \"kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.568792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.568843 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.635652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerStarted","Data":"e49dab86d9882e74e892ee7fa8dea3597b07869028aef372953e565d355691ae"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.647857 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerID="834e413691f1e162ef386f7418d0966ee38ec5ec407fd8911fd50866b7a751a6" exitCode=0 Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.647950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerDied","Data":"834e413691f1e162ef386f7418d0966ee38ec5ec407fd8911fd50866b7a751a6"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.647992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerStarted","Data":"808ed2b88098a7143c62793121d1daa869aba5ce1cbc5f781a97a7cecf2a6841"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.652164 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.660784 4812 generic.go:334] "Generic (PLEG): container finished" podID="3913399c-b196-44e0-a381-0526a310bb4b" containerID="99a7f65043360dce38b5bbcf09e7814693441a44ac72c01ed0a8137f9b53f1a1" exitCode=0 Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.660861 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerDied","Data":"99a7f65043360dce38b5bbcf09e7814693441a44ac72c01ed0a8137f9b53f1a1"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.660899 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerStarted","Data":"1f2d008ae053d4176c63f844c409bea8a31edc27c99aedaa8231738b7b7edc97"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.670008 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.670139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2kd\" (UniqueName: \"kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.670161 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.671179 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.671395 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.694557 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd50996-0863-4c12-87b4-3e771a829d07" containerID="7ba9d4114aba35b68c3e72edcfd02cc96452be5349b7e7e2ef9b0f06c3286f19" exitCode=0 Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.695887 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerDied","Data":"7ba9d4114aba35b68c3e72edcfd02cc96452be5349b7e7e2ef9b0f06c3286f19"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.695917 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerStarted","Data":"365352f96f2d1b2f38ad7dff007503ea3a4ae433f56133e0c1e52ddb812a7734"} Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.713055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2kd\" (UniqueName: \"kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd\") pod \"redhat-marketplace-st44b\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.861585 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.875224 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.881670 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.896297 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:32:20 crc kubenswrapper[4812]: I0218 16:32:20.964721 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.080116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.080174 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dr5t\" (UniqueName: \"kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.080214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.181028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.181145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.181180 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dr5t\" (UniqueName: \"kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.182086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.182197 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content\") pod \"redhat-marketplace-pmn4k\" (UID: 
\"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.208001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dr5t\" (UniqueName: \"kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t\") pod \"redhat-marketplace-pmn4k\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.213026 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.214429 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:21 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:21 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:21 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.214516 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.257078 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.509289 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.716383 4812 generic.go:334] "Generic (PLEG): container finished" podID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerID="92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd" exitCode=0 Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.716469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerDied","Data":"92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.716521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerStarted","Data":"04c89ba5a01e405f21d29ccb103948ce48f7809152391ed386f02d1b16700204"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.744940 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" event={"ID":"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f","Type":"ContainerStarted","Data":"927225b8dac62cf92eb7085bdadfc63e7ecbb7eead023cdbd1f2bb81c28111af"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.745672 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" event={"ID":"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f","Type":"ContainerStarted","Data":"3d21ee9c004d20bbdd0d0ef98a692d93d20d7463fef24cef6a6aff1272b1b966"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.745708 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.762613 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerID="e42f840ff70cf9859823032ba3ce19bfdf730b3589817c4eb83c5a3c361b7750" exitCode=0 Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.762697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerDied","Data":"e42f840ff70cf9859823032ba3ce19bfdf730b3589817c4eb83c5a3c361b7750"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.772005 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerStarted","Data":"3baaff127a8a87b931e00727b3b9dfa770c4c27e780b1df21cd04f66a37d3f83"} Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.806807 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" podStartSLOduration=140.806781926 podStartE2EDuration="2m20.806781926s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:21.768585214 +0000 UTC m=+162.034196133" watchObservedRunningTime="2026-02-18 16:32:21.806781926 +0000 UTC m=+162.072392835" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.819216 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.820003 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.820247 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.826782 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.827086 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.852814 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.853895 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.859653 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.879237 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.904587 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.904639 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.929239 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.930665 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.931359 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.947214 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.947389 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 16:32:21 crc kubenswrapper[4812]: I0218 16:32:21.948287 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.008208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmskk\" (UniqueName: \"kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.008281 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.008326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.008363 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " 
pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.008408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.021371 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.021424 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.022699 4812 patch_prober.go:28] interesting pod/console-f9d7485db-blqkx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.022785 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-blqkx" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.110064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmskk\" (UniqueName: \"kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.110189 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.110346 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.110266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.112747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.113145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.113181 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.113718 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.113805 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.114290 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.136002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.137143 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmskk\" (UniqueName: \"kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk\") pod \"redhat-operators-lvsc5\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.207546 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.209457 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.215375 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:22 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:22 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:22 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.215462 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.215937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.216051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.217976 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.228062 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.236540 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.241334 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.242456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.259871 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.275875 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.419351 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.419851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.419923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhdz\" (UniqueName: \"kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.521680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.521766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhdz\" (UniqueName: \"kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.521845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.522991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.524303 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.527668 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection 
refused" start-of-body= Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.527677 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.527733 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.527756 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.562963 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhdz\" (UniqueName: \"kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz\") pod \"redhat-operators-mzvbp\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.569342 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.618368 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:32:22 crc kubenswrapper[4812]: W0218 16:32:22.655341 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8351117_bbbe_446f_a319_2bd48f5f6f4b.slice/crio-8546c634a20e764fbaabfc9eef5d89d7f9b68e48504011823fe5a7f4feb474d3 WatchSource:0}: Error finding container 8546c634a20e764fbaabfc9eef5d89d7f9b68e48504011823fe5a7f4feb474d3: Status 404 returned error can't find the container with id 8546c634a20e764fbaabfc9eef5d89d7f9b68e48504011823fe5a7f4feb474d3 Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.792303 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.800224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xrhdr" Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.828431 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce646036-070b-4e97-bce1-afff187c3c83" containerID="ec3d4718c91aee07b23a565bb1b54938619c43b78b2bd36ad7739f697881b2c6" exitCode=0 Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.828533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" event={"ID":"ce646036-070b-4e97-bce1-afff187c3c83","Type":"ContainerDied","Data":"ec3d4718c91aee07b23a565bb1b54938619c43b78b2bd36ad7739f697881b2c6"} Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.830451 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 16:32:22 crc 
kubenswrapper[4812]: I0218 16:32:22.873579 4812 generic.go:334] "Generic (PLEG): container finished" podID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerID="e78d0429ffe133e90a6e95007b21adaa8f642fcbfbdde8a766089a9fb8feebc7" exitCode=0 Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.874462 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerDied","Data":"e78d0429ffe133e90a6e95007b21adaa8f642fcbfbdde8a766089a9fb8feebc7"} Feb 18 16:32:22 crc kubenswrapper[4812]: W0218 16:32:22.892066 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1a40ae0d_9e9d_4621_ad6a_b10a865d0a03.slice/crio-ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c WatchSource:0}: Error finding container ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c: Status 404 returned error can't find the container with id ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.895203 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.913703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerStarted","Data":"8546c634a20e764fbaabfc9eef5d89d7f9b68e48504011823fe5a7f4feb474d3"} Feb 18 16:32:22 crc kubenswrapper[4812]: I0218 16:32:22.942748 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d9qws" Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.053054 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.216311 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 16:32:23 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 18 16:32:23 crc kubenswrapper[4812]: [+]process-running ok Feb 18 16:32:23 crc kubenswrapper[4812]: healthz check failed Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.216690 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.934679 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e363386-369f-49c6-8412-c72c0c3a0433" containerID="310140a006b940aaefb1d5c7de0d6920df99551bf2921d37c5c00ae5ae938ea5" exitCode=0 Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.934966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerDied","Data":"310140a006b940aaefb1d5c7de0d6920df99551bf2921d37c5c00ae5ae938ea5"} Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.935065 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" 
event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerStarted","Data":"b53047a0176796ee2c08bada30304be7d55a4f7e648ed08a8c6d98c8a4235759"} Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.942517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"89c4336f-6ddc-4759-9098-1a1391143da1","Type":"ContainerStarted","Data":"68b1faefff01499995ba8c0ecd21d3e19bbe4ecd0a2d6e773c96f490348008dd"} Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.949708 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03","Type":"ContainerStarted","Data":"ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c"} Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.990316 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerID="5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe" exitCode=0 Feb 18 16:32:23 crc kubenswrapper[4812]: I0218 16:32:23.992391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerDied","Data":"5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe"} Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.216809 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.231957 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xs668" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.406494 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.468878 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume\") pod \"ce646036-070b-4e97-bce1-afff187c3c83\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.469002 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume\") pod \"ce646036-070b-4e97-bce1-afff187c3c83\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.469065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tlr4\" (UniqueName: \"kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4\") pod \"ce646036-070b-4e97-bce1-afff187c3c83\" (UID: \"ce646036-070b-4e97-bce1-afff187c3c83\") " Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.470237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce646036-070b-4e97-bce1-afff187c3c83" (UID: "ce646036-070b-4e97-bce1-afff187c3c83"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.488114 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4" (OuterVolumeSpecName: "kube-api-access-5tlr4") pod "ce646036-070b-4e97-bce1-afff187c3c83" (UID: "ce646036-070b-4e97-bce1-afff187c3c83"). InnerVolumeSpecName "kube-api-access-5tlr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.488066 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce646036-070b-4e97-bce1-afff187c3c83" (UID: "ce646036-070b-4e97-bce1-afff187c3c83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.571648 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce646036-070b-4e97-bce1-afff187c3c83-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.571690 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tlr4\" (UniqueName: \"kubernetes.io/projected/ce646036-070b-4e97-bce1-afff187c3c83-kube-api-access-5tlr4\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.571702 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce646036-070b-4e97-bce1-afff187c3c83-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.673454 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.685014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/713f6ad5-53d1-453f-a193-e8ab26e31b0e-metrics-certs\") pod \"network-metrics-daemon-5cqfx\" (UID: \"713f6ad5-53d1-453f-a193-e8ab26e31b0e\") " pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:24 crc kubenswrapper[4812]: I0218 16:32:24.932691 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5cqfx" Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.060052 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" event={"ID":"ce646036-070b-4e97-bce1-afff187c3c83","Type":"ContainerDied","Data":"f50a74e343cd389cca495ca1e117fe4480b7b6301ee49e78b7bf71d70291630a"} Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.060123 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50a74e343cd389cca495ca1e117fe4480b7b6301ee49e78b7bf71d70291630a" Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.060075 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4" Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.071355 4812 generic.go:334] "Generic (PLEG): container finished" podID="89c4336f-6ddc-4759-9098-1a1391143da1" containerID="daa543c4398f45d69434544d1730eedd678d010e38985e8e851fee692facc285" exitCode=0 Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.071489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"89c4336f-6ddc-4759-9098-1a1391143da1","Type":"ContainerDied","Data":"daa543c4398f45d69434544d1730eedd678d010e38985e8e851fee692facc285"} Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.081952 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" containerID="6bfecb6c1db52497b7175aabf0e7508c6b7b76510f179de48aff7c1ab49455d4" exitCode=0 Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.082120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03","Type":"ContainerDied","Data":"6bfecb6c1db52497b7175aabf0e7508c6b7b76510f179de48aff7c1ab49455d4"} Feb 18 16:32:25 crc kubenswrapper[4812]: I0218 16:32:25.326189 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5cqfx"] Feb 18 16:32:25 crc kubenswrapper[4812]: W0218 16:32:25.378688 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod713f6ad5_53d1_453f_a193_e8ab26e31b0e.slice/crio-db97095eca3ff0cbb9010cdcbbec981711d7fca1bff4b5ccdf2f33cf87fe3441 WatchSource:0}: Error finding container db97095eca3ff0cbb9010cdcbbec981711d7fca1bff4b5ccdf2f33cf87fe3441: Status 404 returned error can't find the container with id db97095eca3ff0cbb9010cdcbbec981711d7fca1bff4b5ccdf2f33cf87fe3441 Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.141613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" event={"ID":"713f6ad5-53d1-453f-a193-e8ab26e31b0e","Type":"ContainerStarted","Data":"db97095eca3ff0cbb9010cdcbbec981711d7fca1bff4b5ccdf2f33cf87fe3441"} Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.472887 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.526524 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir\") pod \"89c4336f-6ddc-4759-9098-1a1391143da1\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.526736 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access\") pod \"89c4336f-6ddc-4759-9098-1a1391143da1\" (UID: \"89c4336f-6ddc-4759-9098-1a1391143da1\") " Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.528237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89c4336f-6ddc-4759-9098-1a1391143da1" (UID: "89c4336f-6ddc-4759-9098-1a1391143da1"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.546067 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89c4336f-6ddc-4759-9098-1a1391143da1" (UID: "89c4336f-6ddc-4759-9098-1a1391143da1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.568021 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.628754 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access\") pod \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.628977 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir\") pod \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\" (UID: \"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03\") " Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.629228 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4336f-6ddc-4759-9098-1a1391143da1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.629246 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c4336f-6ddc-4759-9098-1a1391143da1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.629310 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" (UID: "1a40ae0d-9e9d-4621-ad6a-b10a865d0a03"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.638916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" (UID: "1a40ae0d-9e9d-4621-ad6a-b10a865d0a03"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.730483 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:26 crc kubenswrapper[4812]: I0218 16:32:26.730523 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a40ae0d-9e9d-4621-ad6a-b10a865d0a03-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.188333 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"89c4336f-6ddc-4759-9098-1a1391143da1","Type":"ContainerDied","Data":"68b1faefff01499995ba8c0ecd21d3e19bbe4ecd0a2d6e773c96f490348008dd"} Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.188391 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b1faefff01499995ba8c0ecd21d3e19bbe4ecd0a2d6e773c96f490348008dd" Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.188467 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.216273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1a40ae0d-9e9d-4621-ad6a-b10a865d0a03","Type":"ContainerDied","Data":"ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c"} Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.216343 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1ecd3fb6d03f7dbfe110abb9cea86b80ff20c5c4089b69f5b87f168856fc0c" Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.216399 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.218926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" event={"ID":"713f6ad5-53d1-453f-a193-e8ab26e31b0e","Type":"ContainerStarted","Data":"6833878e8737d72cafdd58855aa2531647d44c14b7c1fedaf7880b74d4f73996"} Feb 18 16:32:27 crc kubenswrapper[4812]: I0218 16:32:27.994031 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kjggh" Feb 18 16:32:28 crc kubenswrapper[4812]: I0218 16:32:28.256605 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5cqfx" event={"ID":"713f6ad5-53d1-453f-a193-e8ab26e31b0e","Type":"ContainerStarted","Data":"3e0e6533f6bff1dc22564372b46742285e40929c4f311c269a046e155aef0ac4"} Feb 18 16:32:28 crc kubenswrapper[4812]: I0218 16:32:28.285808 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5cqfx" podStartSLOduration=147.285788876 podStartE2EDuration="2m27.285788876s" podCreationTimestamp="2026-02-18 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:28.283050535 +0000 UTC m=+168.548661474" watchObservedRunningTime="2026-02-18 16:32:28.285788876 +0000 UTC m=+168.551399785" Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.026455 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.031522 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.527965 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.528047 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.528532 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-chx4h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 18 16:32:32 crc kubenswrapper[4812]: I0218 16:32:32.528557 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-chx4h" podUID="4428dc60-fd63-4b22-8589-08c8ac3dde08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 18 16:32:33 crc kubenswrapper[4812]: I0218 16:32:33.414566 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 18 16:32:33 crc kubenswrapper[4812]: I0218 16:32:33.414656 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:32:36 crc kubenswrapper[4812]: I0218 16:32:36.108756 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:36 crc kubenswrapper[4812]: I0218 16:32:36.109407 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" containerID="cri-o://3c7b7b7b672f98b68ac9f11a9bba624c5cf1d3f90fcd816bcd2e5aa06b77b22f" gracePeriod=30 Feb 18 16:32:36 crc kubenswrapper[4812]: I0218 16:32:36.155998 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:36 crc kubenswrapper[4812]: I0218 16:32:36.156349 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerName="route-controller-manager" containerID="cri-o://b77f4b138cb3ecb46f6f35c1677ff50c7cc643b5342bbc797a3ea8bce79993d0" gracePeriod=30 Feb 18 16:32:37 crc kubenswrapper[4812]: I0218 16:32:37.337212 4812 generic.go:334] "Generic (PLEG): container finished" podID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerID="3c7b7b7b672f98b68ac9f11a9bba624c5cf1d3f90fcd816bcd2e5aa06b77b22f" exitCode=0 Feb 18 16:32:37 crc kubenswrapper[4812]: I0218 16:32:37.337311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" event={"ID":"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5","Type":"ContainerDied","Data":"3c7b7b7b672f98b68ac9f11a9bba624c5cf1d3f90fcd816bcd2e5aa06b77b22f"} Feb 18 16:32:37 crc kubenswrapper[4812]: I0218 16:32:37.340076 4812 generic.go:334] "Generic (PLEG): container finished" podID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerID="b77f4b138cb3ecb46f6f35c1677ff50c7cc643b5342bbc797a3ea8bce79993d0" exitCode=0 Feb 18 16:32:37 crc kubenswrapper[4812]: I0218 16:32:37.340132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" event={"ID":"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2","Type":"ContainerDied","Data":"b77f4b138cb3ecb46f6f35c1677ff50c7cc643b5342bbc797a3ea8bce79993d0"} Feb 18 16:32:40 crc kubenswrapper[4812]: I0218 16:32:40.516483 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.854922 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.910453 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:41 crc kubenswrapper[4812]: E0218 16:32:41.910893 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.910935 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: E0218 16:32:41.910978 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerName="route-controller-manager" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.910998 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerName="route-controller-manager" Feb 18 16:32:41 crc kubenswrapper[4812]: E0218 16:32:41.911020 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c4336f-6ddc-4759-9098-1a1391143da1" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911039 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c4336f-6ddc-4759-9098-1a1391143da1" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: E0218 16:32:41.911077 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce646036-070b-4e97-bce1-afff187c3c83" containerName="collect-profiles" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911137 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce646036-070b-4e97-bce1-afff187c3c83" containerName="collect-profiles" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911385 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce646036-070b-4e97-bce1-afff187c3c83" containerName="collect-profiles" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911423 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" containerName="route-controller-manager" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911453 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c4336f-6ddc-4759-9098-1a1391143da1" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.911491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a40ae0d-9e9d-4621-ad6a-b10a865d0a03" containerName="pruner" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.912427 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:41 crc kubenswrapper[4812]: I0218 16:32:41.926558 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.010310 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config\") pod \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.010462 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca\") pod \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.010497 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6588p\" (UniqueName: \"kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p\") pod \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.010550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert\") pod \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\" (UID: \"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2\") " Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.011183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.011241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.011665 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bl8j\" (UniqueName: \"kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.011694 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: 
I0218 16:32:42.012198 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" (UID: "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.012221 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config" (OuterVolumeSpecName: "config") pod "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" (UID: "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.019289 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p" (OuterVolumeSpecName: "kube-api-access-6588p") pod "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" (UID: "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2"). InnerVolumeSpecName "kube-api-access-6588p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.020150 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" (UID: "84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113861 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bl8j\" (UniqueName: \"kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113892 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113983 4812 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.113996 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6588p\" (UniqueName: \"kubernetes.io/projected/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-kube-api-access-6588p\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.114007 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.114016 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.115034 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.115422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.121820 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.144698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bl8j\" (UniqueName: \"kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j\") pod \"route-controller-manager-b4c5d6bfb-f7fv2\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.235641 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.388542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" event={"ID":"84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2","Type":"ContainerDied","Data":"34b642dae474828ae410a3dba2dc5bb1ad49cab46bd87365f3757ffa5e66a05a"} Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.388696 4812 scope.go:117] "RemoveContainer" containerID="b77f4b138cb3ecb46f6f35c1677ff50c7cc643b5342bbc797a3ea8bce79993d0" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.388996 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.436700 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.448239 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vt8tn"] Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.519863 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2" path="/var/lib/kubelet/pods/84b3e4b2-c94a-45f7-a559-dc24dbb4b5c2/volumes" Feb 18 16:32:42 crc kubenswrapper[4812]: I0218 16:32:42.535388 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-chx4h" Feb 18 16:32:43 crc kubenswrapper[4812]: I0218 16:32:43.793773 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t65dk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 16:32:43 crc kubenswrapper[4812]: I0218 16:32:43.794061 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 16:32:48 crc kubenswrapper[4812]: I0218 16:32:48.656047 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 16:32:52 crc kubenswrapper[4812]: I0218 16:32:52.880809 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rk27v" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.773926 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.795662 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t65dk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: i/o timeout" start-of-body= Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.795725 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: i/o timeout" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.805923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config\") pod \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.806022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles\") pod \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.806177 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert\") pod \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.806206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdnck\" (UniqueName: \"kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck\") pod \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.806232 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca\") pod \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\" (UID: \"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5\") " Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.807139 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" (UID: "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.807306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config" (OuterVolumeSpecName: "config") pod "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" (UID: "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.807715 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.807740 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.808865 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" (UID: "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.813356 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.813689 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.813707 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.813828 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" containerName="controller-manager" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.816237 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.819014 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck" (OuterVolumeSpecName: "kube-api-access-hdnck") pod "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" (UID: "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5"). InnerVolumeSpecName "kube-api-access-hdnck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.822629 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.845829 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" (UID: "3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.882939 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.883144 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h2kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-st44b_openshift-marketplace(18e4e3fe-6d0e-4509-8275-ba450daa2602): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.884371 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-st44b" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kn5\" (UniqueName: \"kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908752 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " 
pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908815 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908870 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908881 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdnck\" (UniqueName: \"kubernetes.io/projected/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-kube-api-access-hdnck\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:53 crc kubenswrapper[4812]: I0218 16:32:53.908890 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.929310 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.929707 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dk4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4k6w9_openshift-marketplace(3913399c-b196-44e0-a381-0526a310bb4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 16:32:53 crc kubenswrapper[4812]: E0218 16:32:53.930870 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4k6w9" podUID="3913399c-b196-44e0-a381-0526a310bb4b" Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.018689 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.018913 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqhdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mzvbp_openshift-marketplace(3e363386-369f-49c6-8412-c72c0c3a0433): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.020071 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mzvbp" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.027700 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.027790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.027847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.027867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " 
pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.027994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kn5\" (UniqueName: \"kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.031286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.032286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.034305 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.041910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.045891 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kn5\" (UniqueName: \"kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5\") pod \"controller-manager-85dc6f56fb-nlxfq\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.108740 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:54 crc kubenswrapper[4812]: W0218 16:32:54.126038 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3730c595_8d82_4ca2_91d5_4114f4b86728.slice/crio-d656ba6247e2f3d327549a5cdd23249af8f3f7a9215e4cfba5f34d7d0c6004c2 WatchSource:0}: Error finding container d656ba6247e2f3d327549a5cdd23249af8f3f7a9215e4cfba5f34d7d0c6004c2: Status 404 returned error can't find the container with id d656ba6247e2f3d327549a5cdd23249af8f3f7a9215e4cfba5f34d7d0c6004c2 Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.191648 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.470669 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 16:32:54 crc kubenswrapper[4812]: W0218 16:32:54.487309 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35913594_c8bc_4e10_9d44_89fabf52cdd7.slice/crio-1e5112b06d04bb584e123d1cadcbf043dc9089d7780b38c82743e247cfaa2701 WatchSource:0}: Error finding container 1e5112b06d04bb584e123d1cadcbf043dc9089d7780b38c82743e247cfaa2701: Status 404 returned error can't find the container with id 1e5112b06d04bb584e123d1cadcbf043dc9089d7780b38c82743e247cfaa2701 Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.487632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerStarted","Data":"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.492910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerStarted","Data":"7669e7a6805c730aaa644fe5e70ad4291257da83d9db4b2226a4a86ef845174d"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.498048 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerStarted","Data":"b21c08e4e86042f4e6335ef26520b0de980aa233d50c899c11e90fd9c090756c"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.505854 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" event={"ID":"3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5","Type":"ContainerDied","Data":"8cf4d91dee56f64ad5c5f223f44a2333af4477bcdc5cb0b67970e38d44bdcbe3"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.505918 4812 scope.go:117] "RemoveContainer" containerID="3c7b7b7b672f98b68ac9f11a9bba624c5cf1d3f90fcd816bcd2e5aa06b77b22f" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.506007 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t65dk" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.525837 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" event={"ID":"3730c595-8d82-4ca2-91d5-4114f4b86728","Type":"ContainerStarted","Data":"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.525902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" event={"ID":"3730c595-8d82-4ca2-91d5-4114f4b86728","Type":"ContainerStarted","Data":"d656ba6247e2f3d327549a5cdd23249af8f3f7a9215e4cfba5f34d7d0c6004c2"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.526475 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.529373 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerStarted","Data":"e916a9e745de1ec5d1575026b683afb076f377985168dde2341d1640bfc1f6b1"} Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.554278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerStarted","Data":"966537d4ec686fab14536c4994f21d6e1a04314a79826cceac915cff036ba366"} Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.567437 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-st44b" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.567783 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4k6w9" podUID="3913399c-b196-44e0-a381-0526a310bb4b" Feb 18 16:32:54 crc kubenswrapper[4812]: E0218 16:32:54.568016 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mzvbp" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.663483 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" podStartSLOduration=18.663462068 podStartE2EDuration="18.663462068s" podCreationTimestamp="2026-02-18 16:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:54.641043524 +0000 UTC m=+194.906654423" watchObservedRunningTime="2026-02-18 16:32:54.663462068 +0000 UTC m=+194.929072977" Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.673822 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.677322 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t65dk"] Feb 18 16:32:54 crc kubenswrapper[4812]: I0218 16:32:54.833891 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.551046 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd50996-0863-4c12-87b4-3e771a829d07" containerID="b21c08e4e86042f4e6335ef26520b0de980aa233d50c899c11e90fd9c090756c" exitCode=0 Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.551513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerDied","Data":"b21c08e4e86042f4e6335ef26520b0de980aa233d50c899c11e90fd9c090756c"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.568914 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerID="e916a9e745de1ec5d1575026b683afb076f377985168dde2341d1640bfc1f6b1" exitCode=0 Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.569012 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerDied","Data":"e916a9e745de1ec5d1575026b683afb076f377985168dde2341d1640bfc1f6b1"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.575615 4812 generic.go:334] "Generic (PLEG): container finished" podID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerID="966537d4ec686fab14536c4994f21d6e1a04314a79826cceac915cff036ba366" exitCode=0 Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.575729 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerDied","Data":"966537d4ec686fab14536c4994f21d6e1a04314a79826cceac915cff036ba366"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.586232 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" event={"ID":"35913594-c8bc-4e10-9d44-89fabf52cdd7","Type":"ContainerStarted","Data":"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.586320 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" event={"ID":"35913594-c8bc-4e10-9d44-89fabf52cdd7","Type":"ContainerStarted","Data":"1e5112b06d04bb584e123d1cadcbf043dc9089d7780b38c82743e247cfaa2701"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.586349 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.592940 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.595787 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerID="d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c" 
exitCode=0 Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.595860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerDied","Data":"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.598858 4812 generic.go:334] "Generic (PLEG): container finished" podID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerID="7669e7a6805c730aaa644fe5e70ad4291257da83d9db4b2226a4a86ef845174d" exitCode=0 Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.599411 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerDied","Data":"7669e7a6805c730aaa644fe5e70ad4291257da83d9db4b2226a4a86ef845174d"} Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.729491 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" podStartSLOduration=19.729465721 podStartE2EDuration="19.729465721s" podCreationTimestamp="2026-02-18 16:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:55.72670136 +0000 UTC m=+195.992312269" watchObservedRunningTime="2026-02-18 16:32:55.729465721 +0000 UTC m=+195.995076630" Feb 18 16:32:55 crc kubenswrapper[4812]: I0218 16:32:55.948158 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 16:32:56 crc kubenswrapper[4812]: I0218 16:32:56.051860 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:56 crc kubenswrapper[4812]: I0218 16:32:56.515731 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5" path="/var/lib/kubelet/pods/3a1bfb3e-1d99-4bcd-84ce-347d5fd9b6f5/volumes" Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.615560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerStarted","Data":"da474194ba4f5e6c1caced7e7221337f5b8a9c5fae285c6bd9e88f6efce2a580"} Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.617851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerStarted","Data":"f797d60019c86bd79d910e4d6fa5c49fce67349b08fa7ee8c0a4fe236f7bf822"} Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.619786 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerStarted","Data":"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b"} Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.621606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerStarted","Data":"8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c"} Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.623440 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerStarted","Data":"ffa1ed499759eac86396c765ade08dd25c97be9a169f7a4538803369792bcec4"} Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.623622 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" podUID="3730c595-8d82-4ca2-91d5-4114f4b86728" containerName="route-controller-manager" containerID="cri-o://c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457" gracePeriod=30 Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.623867 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" podUID="35913594-c8bc-4e10-9d44-89fabf52cdd7" containerName="controller-manager" containerID="cri-o://0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d" gracePeriod=30 Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.651469 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l9xq4" podStartSLOduration=3.415267519 podStartE2EDuration="38.651442781s" podCreationTimestamp="2026-02-18 16:32:19 +0000 UTC" firstStartedPulling="2026-02-18 16:32:21.771378476 +0000 UTC m=+162.036989385" lastFinishedPulling="2026-02-18 16:32:57.007553738 +0000 UTC m=+197.273164647" observedRunningTime="2026-02-18 16:32:57.650475971 +0000 UTC m=+197.916086890" watchObservedRunningTime="2026-02-18 16:32:57.651442781 +0000 UTC m=+197.917053690" Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.681855 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86cmv" podStartSLOduration=3.606242446 podStartE2EDuration="39.68182751s" podCreationTimestamp="2026-02-18 16:32:18 +0000 UTC" firstStartedPulling="2026-02-18 16:32:20.706318973 +0000 UTC m=+160.971929882" lastFinishedPulling="2026-02-18 16:32:56.781904047 +0000 UTC m=+197.047514946" observedRunningTime="2026-02-18 16:32:57.681030055 +0000 UTC m=+197.946640974" watchObservedRunningTime="2026-02-18 16:32:57.68182751 +0000 UTC m=+197.947438419" Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.700037 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvsc5" podStartSLOduration=3.841548345 podStartE2EDuration="36.700010218s" podCreationTimestamp="2026-02-18 16:32:21 +0000 UTC" firstStartedPulling="2026-02-18 16:32:23.993484527 +0000 UTC m=+164.259095436" lastFinishedPulling="2026-02-18 16:32:56.85194641 +0000 UTC m=+197.117557309" observedRunningTime="2026-02-18 16:32:57.698698487 +0000 UTC m=+197.964309416" watchObservedRunningTime="2026-02-18 16:32:57.700010218 +0000 UTC m=+197.965621127" Feb 18 16:32:57 crc kubenswrapper[4812]: I0218 16:32:57.724621 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmn4k" podStartSLOduration=3.5368999800000003 podStartE2EDuration="37.724589755s" podCreationTimestamp="2026-02-18 16:32:20 +0000 UTC" firstStartedPulling="2026-02-18 16:32:22.886598563 +0000 UTC m=+163.152209472" lastFinishedPulling="2026-02-18 16:32:57.074288338 +0000 UTC m=+197.339899247" observedRunningTime="2026-02-18 16:32:57.724318936 +0000 UTC m=+197.989929845" watchObservedRunningTime="2026-02-18 16:32:57.724589755 +0000 UTC m=+197.990200664" Feb 18 16:32:57 crc 
kubenswrapper[4812]: I0218 16:32:57.747938 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7jvz" podStartSLOduration=3.530280099 podStartE2EDuration="39.747910853s" podCreationTimestamp="2026-02-18 16:32:18 +0000 UTC" firstStartedPulling="2026-02-18 16:32:20.651789971 +0000 UTC m=+160.917400880" lastFinishedPulling="2026-02-18 16:32:56.869420725 +0000 UTC m=+197.135031634" observedRunningTime="2026-02-18 16:32:57.747677095 +0000 UTC m=+198.013288014" watchObservedRunningTime="2026-02-18 16:32:57.747910853 +0000 UTC m=+198.013521762" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.054472 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.095808 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:32:58 crc kubenswrapper[4812]: E0218 16:32:58.096159 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3730c595-8d82-4ca2-91d5-4114f4b86728" containerName="route-controller-manager" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.096182 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3730c595-8d82-4ca2-91d5-4114f4b86728" containerName="route-controller-manager" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.096318 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3730c595-8d82-4ca2-91d5-4114f4b86728" containerName="route-controller-manager" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.096869 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.111661 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.122804 4812 util.go:48] "No ready sandbox for pod can be found. 
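
The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration (wall time from podCreationTimestamp to the observed running time) and a much smaller podStartSLOduration. The numbers are consistent with the SLO figure excluding the image-pull window: for certified-operators-z7jvz, 39.747910853s minus the pull interval (16:32:20.651789971 to 16:32:56.869420725, i.e. 36.217630754s) is exactly 3.530280099s. A minimal Go sketch that reproduces the arithmetic from the logged values (the parsing layout is an illustrative assumption; the kubelet computes this internally):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamp layout matching the "+0000 UTC" strings in the log lines above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Values copied from the certified-operators-z7jvz entry above.
        firstStartedPulling := parse("2026-02-18 16:32:20.651789971 +0000 UTC")
        lastFinishedPulling := parse("2026-02-18 16:32:56.869420725 +0000 UTC")
        e2e, _ := time.ParseDuration("39.747910853s") // podStartE2EDuration

        pulling := lastFinishedPulling.Sub(firstStartedPulling) // time spent pulling the index image
        slo := e2e - pulling                                    // 3.530280099s, the logged podStartSLOduration

        fmt.Println("pulling:", pulling, "slo:", slo)
    }
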
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.133353 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca\") pod \"3730c595-8d82-4ca2-91d5-4114f4b86728\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.133447 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bl8j\" (UniqueName: \"kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j\") pod \"3730c595-8d82-4ca2-91d5-4114f4b86728\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.133547 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert\") pod \"3730c595-8d82-4ca2-91d5-4114f4b86728\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.133593 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config\") pod \"3730c595-8d82-4ca2-91d5-4114f4b86728\" (UID: \"3730c595-8d82-4ca2-91d5-4114f4b86728\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.134538 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca" (OuterVolumeSpecName: "client-ca") pod "3730c595-8d82-4ca2-91d5-4114f4b86728" (UID: "3730c595-8d82-4ca2-91d5-4114f4b86728"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.134648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config" (OuterVolumeSpecName: "config") pod "3730c595-8d82-4ca2-91d5-4114f4b86728" (UID: "3730c595-8d82-4ca2-91d5-4114f4b86728"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.146329 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3730c595-8d82-4ca2-91d5-4114f4b86728" (UID: "3730c595-8d82-4ca2-91d5-4114f4b86728"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.147700 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j" (OuterVolumeSpecName: "kube-api-access-8bl8j") pod "3730c595-8d82-4ca2-91d5-4114f4b86728" (UID: "3730c595-8d82-4ca2-91d5-4114f4b86728"). InnerVolumeSpecName "kube-api-access-8bl8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.234902 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca\") pod \"35913594-c8bc-4e10-9d44-89fabf52cdd7\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.234970 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8kn5\" (UniqueName: \"kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5\") pod \"35913594-c8bc-4e10-9d44-89fabf52cdd7\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235050 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles\") pod \"35913594-c8bc-4e10-9d44-89fabf52cdd7\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235137 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert\") pod \"35913594-c8bc-4e10-9d44-89fabf52cdd7\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config\") pod \"35913594-c8bc-4e10-9d44-89fabf52cdd7\" (UID: \"35913594-c8bc-4e10-9d44-89fabf52cdd7\") " Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235484 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswc5\" (UniqueName: \"kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " 
pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235700 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bl8j\" (UniqueName: \"kubernetes.io/projected/3730c595-8d82-4ca2-91d5-4114f4b86728-kube-api-access-8bl8j\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235719 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3730c595-8d82-4ca2-91d5-4114f4b86728-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235729 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235740 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3730c595-8d82-4ca2-91d5-4114f4b86728-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.235900 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "35913594-c8bc-4e10-9d44-89fabf52cdd7" (UID: "35913594-c8bc-4e10-9d44-89fabf52cdd7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.236046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca" (OuterVolumeSpecName: "client-ca") pod "35913594-c8bc-4e10-9d44-89fabf52cdd7" (UID: "35913594-c8bc-4e10-9d44-89fabf52cdd7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.236077 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config" (OuterVolumeSpecName: "config") pod "35913594-c8bc-4e10-9d44-89fabf52cdd7" (UID: "35913594-c8bc-4e10-9d44-89fabf52cdd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.239387 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35913594-c8bc-4e10-9d44-89fabf52cdd7" (UID: "35913594-c8bc-4e10-9d44-89fabf52cdd7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.239417 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5" (OuterVolumeSpecName: "kube-api-access-d8kn5") pod "35913594-c8bc-4e10-9d44-89fabf52cdd7" (UID: "35913594-c8bc-4e10-9d44-89fabf52cdd7"). InnerVolumeSpecName "kube-api-access-d8kn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337614 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337694 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dswc5\" (UniqueName: \"kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337734 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337762 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337881 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337898 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8kn5\" (UniqueName: \"kubernetes.io/projected/35913594-c8bc-4e10-9d44-89fabf52cdd7-kube-api-access-d8kn5\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337913 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337926 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35913594-c8bc-4e10-9d44-89fabf52cdd7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.337937 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35913594-c8bc-4e10-9d44-89fabf52cdd7-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.339028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc 
kubenswrapper[4812]: I0218 16:32:58.339144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.343014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.359553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dswc5\" (UniqueName: \"kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5\") pod \"route-controller-manager-85d9b4f67-4slp2\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.440879 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.642791 4812 generic.go:334] "Generic (PLEG): container finished" podID="35913594-c8bc-4e10-9d44-89fabf52cdd7" containerID="0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d" exitCode=0 Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.642891 4812 util.go:48] "No ready sandbox for pod can be found. 
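
The reconciler_common.go sequence above (UnmountVolume started, TearDown succeeded, Volume detached for the deleted route-controller-manager and controller-manager pods, interleaved with VerifyControllerAttachedVolume and MountVolume.SetUp for the replacement route-controller-manager-85d9b4f67-4slp2 pod) reflects the kubelet's desired-state/actual-state volume reconciliation: volumes still mounted for removed pods are torn down, while volumes required by newly admitted pods are set up. A minimal, generic sketch of that pattern, assuming made-up names (this is not the kubelet's volume manager code):

    package main

    import "fmt"

    // volumeKey identifies a volume for a specific pod, e.g. "<pod-uid>/client-ca".
    type volumeKey string

    // reconcile compares the desired set of mounted volumes against what is
    // actually mounted and returns what must be mounted and unmounted.
    func reconcile(desired, actual map[volumeKey]bool) (toMount, toUnmount []volumeKey) {
        for k := range desired {
            if !actual[k] {
                toMount = append(toMount, k)
            }
        }
        for k := range actual {
            if !desired[k] {
                toUnmount = append(toUnmount, k)
            }
        }
        return
    }

    func main() {
        // The deleted pod's volumes are still mounted; the new pod's are desired but absent.
        actual := map[volumeKey]bool{
            "3730c595-8d82-4ca2-91d5-4114f4b86728/client-ca":    true,
            "3730c595-8d82-4ca2-91d5-4114f4b86728/serving-cert": true,
        }
        desired := map[volumeKey]bool{
            "f78750cc-a36b-4fae-b038-7e1288c3ba06/client-ca":    true,
            "f78750cc-a36b-4fae-b038-7e1288c3ba06/serving-cert": true,
        }
        mount, unmount := reconcile(desired, actual)
        fmt.Println("mount:", mount, "unmount:", unmount)
    }
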
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.643003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" event={"ID":"35913594-c8bc-4e10-9d44-89fabf52cdd7","Type":"ContainerDied","Data":"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d"} Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.643112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq" event={"ID":"35913594-c8bc-4e10-9d44-89fabf52cdd7","Type":"ContainerDied","Data":"1e5112b06d04bb584e123d1cadcbf043dc9089d7780b38c82743e247cfaa2701"} Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.643142 4812 scope.go:117] "RemoveContainer" containerID="0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.648976 4812 generic.go:334] "Generic (PLEG): container finished" podID="3730c595-8d82-4ca2-91d5-4114f4b86728" containerID="c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457" exitCode=0 Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.649619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" event={"ID":"3730c595-8d82-4ca2-91d5-4114f4b86728","Type":"ContainerDied","Data":"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457"} Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.649700 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" event={"ID":"3730c595-8d82-4ca2-91d5-4114f4b86728","Type":"ContainerDied","Data":"d656ba6247e2f3d327549a5cdd23249af8f3f7a9215e4cfba5f34d7d0c6004c2"} Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.650014 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.685191 4812 scope.go:117] "RemoveContainer" containerID="0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d" Feb 18 16:32:58 crc kubenswrapper[4812]: E0218 16:32:58.701461 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d\": container with ID starting with 0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d not found: ID does not exist" containerID="0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.701553 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d"} err="failed to get container status \"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d\": rpc error: code = NotFound desc = could not find container \"0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d\": container with ID starting with 0f38d6d40679696dfc1e5d3fd0859a06677ab2c42f97d3013c4fa5e07bf01d6d not found: ID does not exist" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.701635 4812 scope.go:117] "RemoveContainer" containerID="c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.736464 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.749741 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4c5d6bfb-f7fv2"] Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.752858 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.761117 4812 scope.go:117] "RemoveContainer" containerID="c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457" Feb 18 16:32:58 crc kubenswrapper[4812]: E0218 16:32:58.764543 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457\": container with ID starting with c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457 not found: ID does not exist" containerID="c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.764583 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457"} err="failed to get container status \"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457\": rpc error: code = NotFound desc = could not find container \"c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457\": container with ID starting with c37a3bac16c0d1dc4231c66d0c227e9c67df152b3661064c5401a1aaff247457 not found: ID does not exist" Feb 18 16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.774184 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85dc6f56fb-nlxfq"] Feb 18 
16:32:58 crc kubenswrapper[4812]: I0218 16:32:58.849539 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:32:58 crc kubenswrapper[4812]: W0218 16:32:58.861676 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78750cc_a36b_4fae_b038_7e1288c3ba06.slice/crio-78a1a143da8b8e50fd8843768bb9c213bdd456306c6d2f65232b2c8b560212c4 WatchSource:0}: Error finding container 78a1a143da8b8e50fd8843768bb9c213bdd456306c6d2f65232b2c8b560212c4: Status 404 returned error can't find the container with id 78a1a143da8b8e50fd8843768bb9c213bdd456306c6d2f65232b2c8b560212c4 Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.224650 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.224962 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.235375 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.235474 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.399947 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.400026 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.660030 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" event={"ID":"f78750cc-a36b-4fae-b038-7e1288c3ba06","Type":"ContainerStarted","Data":"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc"} Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.660091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" event={"ID":"f78750cc-a36b-4fae-b038-7e1288c3ba06","Type":"ContainerStarted","Data":"78a1a143da8b8e50fd8843768bb9c213bdd456306c6d2f65232b2c8b560212c4"} Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.661416 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.666910 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:32:59 crc kubenswrapper[4812]: I0218 16:32:59.682347 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" podStartSLOduration=3.682333283 podStartE2EDuration="3.682333283s" podCreationTimestamp="2026-02-18 16:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:32:59.68062605 +0000 UTC m=+199.946236959" 
watchObservedRunningTime="2026-02-18 16:32:59.682333283 +0000 UTC m=+199.947944192" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.355566 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7jvz" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="registry-server" probeResult="failure" output=< Feb 18 16:33:00 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:33:00 crc kubenswrapper[4812]: > Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.359057 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-86cmv" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="registry-server" probeResult="failure" output=< Feb 18 16:33:00 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:33:00 crc kubenswrapper[4812]: > Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.443209 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l9xq4" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="registry-server" probeResult="failure" output=< Feb 18 16:33:00 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:33:00 crc kubenswrapper[4812]: > Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.516505 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35913594-c8bc-4e10-9d44-89fabf52cdd7" path="/var/lib/kubelet/pods/35913594-c8bc-4e10-9d44-89fabf52cdd7/volumes" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.517689 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3730c595-8d82-4ca2-91d5-4114f4b86728" path="/var/lib/kubelet/pods/3730c595-8d82-4ca2-91d5-4114f4b86728/volumes" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.595330 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 16:33:00 crc kubenswrapper[4812]: E0218 16:33:00.595610 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35913594-c8bc-4e10-9d44-89fabf52cdd7" containerName="controller-manager" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.595626 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="35913594-c8bc-4e10-9d44-89fabf52cdd7" containerName="controller-manager" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.595750 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="35913594-c8bc-4e10-9d44-89fabf52cdd7" containerName="controller-manager" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.596233 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.598226 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.598987 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.609701 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.671031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.671246 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.772640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.772813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.772900 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.797353 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.893684 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.894863 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.897392 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.897595 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.897728 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.898166 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.898306 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.904656 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.907817 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.909075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.949534 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.975287 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.975338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.975377 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:00 crc kubenswrapper[4812]: I0218 16:33:00.975426 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrnp\" (UniqueName: \"kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:00 crc 
kubenswrapper[4812]: I0218 16:33:00.975460 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.077293 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrnp\" (UniqueName: \"kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.077356 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.077403 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.077425 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.077461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.079527 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.081157 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.085214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.124649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrnp\" (UniqueName: \"kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.160537 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca\") pod \"controller-manager-784f765dc7-wbx4t\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.162016 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.225205 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.258563 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.258646 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.336223 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.453859 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.679952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" event={"ID":"52333fc3-471a-4ca6-ae6d-4db5d54941bf","Type":"ContainerStarted","Data":"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f"} Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.680019 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" event={"ID":"52333fc3-471a-4ca6-ae6d-4db5d54941bf","Type":"ContainerStarted","Data":"96b69c6b314f93eb8be574ee9d6fb29fa6805f63fdf1cc441df694543718e5a4"} Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.681929 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.689139 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.692017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"a22deaa4-2059-417a-aebe-1e53bd57fc67","Type":"ContainerStarted","Data":"4347ccb165c46fc4b2e60493e028b5f1186a6edf037ff59b1aad56361aaa5be8"} Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.692051 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a22deaa4-2059-417a-aebe-1e53bd57fc67","Type":"ContainerStarted","Data":"128a9e1d93100252f2cf3158675fa367bc9871922820cf0c6806d5c339836c93"} Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.722606 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" podStartSLOduration=6.722577296 podStartE2EDuration="6.722577296s" podCreationTimestamp="2026-02-18 16:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:01.705121561 +0000 UTC m=+201.970732490" watchObservedRunningTime="2026-02-18 16:33:01.722577296 +0000 UTC m=+201.988188205" Feb 18 16:33:01 crc kubenswrapper[4812]: I0218 16:33:01.724499 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.724479385 podStartE2EDuration="1.724479385s" podCreationTimestamp="2026-02-18 16:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:01.719760258 +0000 UTC m=+201.985371167" watchObservedRunningTime="2026-02-18 16:33:01.724479385 +0000 UTC m=+201.990090294" Feb 18 16:33:02 crc kubenswrapper[4812]: I0218 16:33:02.228766 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:33:02 crc kubenswrapper[4812]: I0218 16:33:02.228839 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:33:02 crc kubenswrapper[4812]: I0218 16:33:02.698598 4812 generic.go:334] "Generic (PLEG): container finished" podID="a22deaa4-2059-417a-aebe-1e53bd57fc67" containerID="4347ccb165c46fc4b2e60493e028b5f1186a6edf037ff59b1aad56361aaa5be8" exitCode=0 Feb 18 16:33:02 crc kubenswrapper[4812]: I0218 16:33:02.698713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a22deaa4-2059-417a-aebe-1e53bd57fc67","Type":"ContainerDied","Data":"4347ccb165c46fc4b2e60493e028b5f1186a6edf037ff59b1aad56361aaa5be8"} Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.281105 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lvsc5" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" probeResult="failure" output=< Feb 18 16:33:03 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:33:03 crc kubenswrapper[4812]: > Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.414248 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.414641 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.414699 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.415430 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.415498 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3" gracePeriod=600 Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.711940 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3" exitCode=0 Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.712891 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3"} Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.712955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e"} Feb 18 16:33:03 crc kubenswrapper[4812]: I0218 16:33:03.990569 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.128404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir\") pod \"a22deaa4-2059-417a-aebe-1e53bd57fc67\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.128578 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access\") pod \"a22deaa4-2059-417a-aebe-1e53bd57fc67\" (UID: \"a22deaa4-2059-417a-aebe-1e53bd57fc67\") " Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.128584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a22deaa4-2059-417a-aebe-1e53bd57fc67" (UID: "a22deaa4-2059-417a-aebe-1e53bd57fc67"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.128895 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a22deaa4-2059-417a-aebe-1e53bd57fc67-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.138283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a22deaa4-2059-417a-aebe-1e53bd57fc67" (UID: "a22deaa4-2059-417a-aebe-1e53bd57fc67"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.229784 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22deaa4-2059-417a-aebe-1e53bd57fc67-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.721156 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.721112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a22deaa4-2059-417a-aebe-1e53bd57fc67","Type":"ContainerDied","Data":"128a9e1d93100252f2cf3158675fa367bc9871922820cf0c6806d5c339836c93"} Feb 18 16:33:04 crc kubenswrapper[4812]: I0218 16:33:04.721295 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="128a9e1d93100252f2cf3158675fa367bc9871922820cf0c6806d5c339836c93" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.191043 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 16:33:08 crc kubenswrapper[4812]: E0218 16:33:08.191932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22deaa4-2059-417a-aebe-1e53bd57fc67" containerName="pruner" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.191947 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22deaa4-2059-417a-aebe-1e53bd57fc67" containerName="pruner" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.192040 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22deaa4-2059-417a-aebe-1e53bd57fc67" containerName="pruner" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.193909 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.202844 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.203192 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.206570 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.288305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.288565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.288840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.390647 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.390735 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.390788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.390819 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.390890 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock\") pod \"installer-9-crc\" (UID: 
\"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.422902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access\") pod \"installer-9-crc\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:08 crc kubenswrapper[4812]: I0218 16:33:08.523606 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.716436 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.717190 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.722971 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.765697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.766294 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.767515 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:33:09 crc kubenswrapper[4812]: I0218 16:33:09.968763 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 16:33:10 crc kubenswrapper[4812]: I0218 16:33:10.659919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b6e4c967-d106-4af3-b4d2-813c0ea93021","Type":"ContainerStarted","Data":"af4477f0474f29fb5d157f3f0c68bf32441297afe6de02c582cb86adc04c5d7d"} Feb 18 16:33:11 crc kubenswrapper[4812]: I0218 16:33:11.307231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:11 crc kubenswrapper[4812]: I0218 16:33:11.895992 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:33:11 crc kubenswrapper[4812]: I0218 16:33:11.896451 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7jvz" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="registry-server" containerID="cri-o://8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c" gracePeriod=2 Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.094178 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.095341 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l9xq4" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="registry-server" containerID="cri-o://da474194ba4f5e6c1caced7e7221337f5b8a9c5fae285c6bd9e88f6efce2a580" 
gracePeriod=2 Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.267801 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.315008 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:33:12 crc kubenswrapper[4812]: E0218 16:33:12.395960 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55384caf_f9cf_4c69_978f_4f27c2a0aec0.slice/crio-conmon-8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.678593 4812 generic.go:334] "Generic (PLEG): container finished" podID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerID="8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c" exitCode=0 Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.678698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerDied","Data":"8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c"} Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.684654 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerID="da474194ba4f5e6c1caced7e7221337f5b8a9c5fae285c6bd9e88f6efce2a580" exitCode=0 Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.684667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerDied","Data":"da474194ba4f5e6c1caced7e7221337f5b8a9c5fae285c6bd9e88f6efce2a580"} Feb 18 16:33:12 crc kubenswrapper[4812]: I0218 16:33:12.689194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerStarted","Data":"358d0c5391c6e485294e05de2bb5a945009b0fd8e56ec9df94bfced4f5d6b621"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.357051 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.453122 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.459616 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content\") pod \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.459817 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjllp\" (UniqueName: \"kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp\") pod \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.459870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities\") pod \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\" (UID: \"55384caf-f9cf-4c69-978f-4f27c2a0aec0\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.461231 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities" (OuterVolumeSpecName: "utilities") pod "55384caf-f9cf-4c69-978f-4f27c2a0aec0" (UID: "55384caf-f9cf-4c69-978f-4f27c2a0aec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.472775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp" (OuterVolumeSpecName: "kube-api-access-wjllp") pod "55384caf-f9cf-4c69-978f-4f27c2a0aec0" (UID: "55384caf-f9cf-4c69-978f-4f27c2a0aec0"). InnerVolumeSpecName "kube-api-access-wjllp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.524539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55384caf-f9cf-4c69-978f-4f27c2a0aec0" (UID: "55384caf-f9cf-4c69-978f-4f27c2a0aec0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.560802 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities\") pod \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.560868 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content\") pod \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.560970 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z\") pod \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\" (UID: \"1b91248b-ae50-4abe-8e1d-f7c6495e7d85\") " Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.561670 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities" (OuterVolumeSpecName: "utilities") pod "1b91248b-ae50-4abe-8e1d-f7c6495e7d85" (UID: "1b91248b-ae50-4abe-8e1d-f7c6495e7d85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.561961 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.561986 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55384caf-f9cf-4c69-978f-4f27c2a0aec0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.561996 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.562006 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjllp\" (UniqueName: \"kubernetes.io/projected/55384caf-f9cf-4c69-978f-4f27c2a0aec0-kube-api-access-wjllp\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.566438 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z" (OuterVolumeSpecName: "kube-api-access-6kv8z") pod "1b91248b-ae50-4abe-8e1d-f7c6495e7d85" (UID: "1b91248b-ae50-4abe-8e1d-f7c6495e7d85"). InnerVolumeSpecName "kube-api-access-6kv8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.623219 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b91248b-ae50-4abe-8e1d-f7c6495e7d85" (UID: "1b91248b-ae50-4abe-8e1d-f7c6495e7d85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.663237 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.663282 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv8z\" (UniqueName: \"kubernetes.io/projected/1b91248b-ae50-4abe-8e1d-f7c6495e7d85-kube-api-access-6kv8z\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.704609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerStarted","Data":"4fb78d09a7273e6dcc8765c0b389f54d7acfb4e4f506224189a9008c765ca127"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.709149 4812 generic.go:334] "Generic (PLEG): container finished" podID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerID="5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe" exitCode=0 Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.709249 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerDied","Data":"5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.713983 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b6e4c967-d106-4af3-b4d2-813c0ea93021","Type":"ContainerStarted","Data":"1cc78efa063c71320e36ae119b0b2a507b29357e9ccad2c0bc9ec33f9bb728b1"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.722230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9xq4" event={"ID":"1b91248b-ae50-4abe-8e1d-f7c6495e7d85","Type":"ContainerDied","Data":"e49dab86d9882e74e892ee7fa8dea3597b07869028aef372953e565d355691ae"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.722302 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9xq4" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.722309 4812 scope.go:117] "RemoveContainer" containerID="da474194ba4f5e6c1caced7e7221337f5b8a9c5fae285c6bd9e88f6efce2a580" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.727252 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e363386-369f-49c6-8412-c72c0c3a0433" containerID="358d0c5391c6e485294e05de2bb5a945009b0fd8e56ec9df94bfced4f5d6b621" exitCode=0 Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.727357 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerDied","Data":"358d0c5391c6e485294e05de2bb5a945009b0fd8e56ec9df94bfced4f5d6b621"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.732525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7jvz" event={"ID":"55384caf-f9cf-4c69-978f-4f27c2a0aec0","Type":"ContainerDied","Data":"808ed2b88098a7143c62793121d1daa869aba5ce1cbc5f781a97a7cecf2a6841"} Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.732727 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7jvz" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.754234 4812 scope.go:117] "RemoveContainer" containerID="e916a9e745de1ec5d1575026b683afb076f377985168dde2341d1640bfc1f6b1" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.791724 4812 scope.go:117] "RemoveContainer" containerID="e42f840ff70cf9859823032ba3ce19bfdf730b3589817c4eb83c5a3c361b7750" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.811733 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.811706492 podStartE2EDuration="5.811706492s" podCreationTimestamp="2026-02-18 16:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:13.807017285 +0000 UTC m=+214.072628194" watchObservedRunningTime="2026-02-18 16:33:13.811706492 +0000 UTC m=+214.077317411" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.812475 4812 scope.go:117] "RemoveContainer" containerID="8658002595e5ed12e9e82922df7fcd9258cecea36075e337740eae790c74e96c" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.830913 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.837906 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7jvz"] Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.848774 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.852953 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l9xq4"] Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.891621 4812 scope.go:117] "RemoveContainer" containerID="7669e7a6805c730aaa644fe5e70ad4291257da83d9db4b2226a4a86ef845174d" Feb 18 16:33:13 crc kubenswrapper[4812]: I0218 16:33:13.908993 4812 scope.go:117] "RemoveContainer" containerID="834e413691f1e162ef386f7418d0966ee38ec5ec407fd8911fd50866b7a751a6" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.296790 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.297510 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmn4k" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="registry-server" containerID="cri-o://f797d60019c86bd79d910e4d6fa5c49fce67349b08fa7ee8c0a4fe236f7bf822" gracePeriod=2 Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.517951 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" path="/var/lib/kubelet/pods/1b91248b-ae50-4abe-8e1d-f7c6495e7d85/volumes" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.518596 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" path="/var/lib/kubelet/pods/55384caf-f9cf-4c69-978f-4f27c2a0aec0/volumes" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.739732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" 
event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerStarted","Data":"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8"} Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.744620 4812 generic.go:334] "Generic (PLEG): container finished" podID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerID="f797d60019c86bd79d910e4d6fa5c49fce67349b08fa7ee8c0a4fe236f7bf822" exitCode=0 Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.744669 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerDied","Data":"f797d60019c86bd79d910e4d6fa5c49fce67349b08fa7ee8c0a4fe236f7bf822"} Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.744687 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmn4k" event={"ID":"04541d04-68f3-49c9-abb1-4feecceacbd6","Type":"ContainerDied","Data":"3baaff127a8a87b931e00727b3b9dfa770c4c27e780b1df21cd04f66a37d3f83"} Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.744700 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3baaff127a8a87b931e00727b3b9dfa770c4c27e780b1df21cd04f66a37d3f83" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.748632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerStarted","Data":"83874e828b4a38563a197437639c69beda44f4712d61823cd2fd99e49e10d526"} Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.752214 4812 generic.go:334] "Generic (PLEG): container finished" podID="3913399c-b196-44e0-a381-0526a310bb4b" containerID="4fb78d09a7273e6dcc8765c0b389f54d7acfb4e4f506224189a9008c765ca127" exitCode=0 Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.752479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerDied","Data":"4fb78d09a7273e6dcc8765c0b389f54d7acfb4e4f506224189a9008c765ca127"} Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.768739 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.771007 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-st44b" podStartSLOduration=2.389705606 podStartE2EDuration="54.770968724s" podCreationTimestamp="2026-02-18 16:32:20 +0000 UTC" firstStartedPulling="2026-02-18 16:32:21.731008216 +0000 UTC m=+161.996619125" lastFinishedPulling="2026-02-18 16:33:14.112271334 +0000 UTC m=+214.377882243" observedRunningTime="2026-02-18 16:33:14.768748015 +0000 UTC m=+215.034358934" watchObservedRunningTime="2026-02-18 16:33:14.770968724 +0000 UTC m=+215.036579633" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.819200 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mzvbp" podStartSLOduration=2.594995931 podStartE2EDuration="52.819177739s" podCreationTimestamp="2026-02-18 16:32:22 +0000 UTC" firstStartedPulling="2026-02-18 16:32:23.944684542 +0000 UTC m=+164.210295451" lastFinishedPulling="2026-02-18 16:33:14.16886635 +0000 UTC m=+214.434477259" observedRunningTime="2026-02-18 16:33:14.817751414 +0000 UTC m=+215.083362323" watchObservedRunningTime="2026-02-18 16:33:14.819177739 +0000 UTC m=+215.084788648" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.888706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content\") pod \"04541d04-68f3-49c9-abb1-4feecceacbd6\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.888772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities\") pod \"04541d04-68f3-49c9-abb1-4feecceacbd6\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.888907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dr5t\" (UniqueName: \"kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t\") pod \"04541d04-68f3-49c9-abb1-4feecceacbd6\" (UID: \"04541d04-68f3-49c9-abb1-4feecceacbd6\") " Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.890077 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities" (OuterVolumeSpecName: "utilities") pod "04541d04-68f3-49c9-abb1-4feecceacbd6" (UID: "04541d04-68f3-49c9-abb1-4feecceacbd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.898366 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t" (OuterVolumeSpecName: "kube-api-access-2dr5t") pod "04541d04-68f3-49c9-abb1-4feecceacbd6" (UID: "04541d04-68f3-49c9-abb1-4feecceacbd6"). InnerVolumeSpecName "kube-api-access-2dr5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.925056 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04541d04-68f3-49c9-abb1-4feecceacbd6" (UID: "04541d04-68f3-49c9-abb1-4feecceacbd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.991002 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dr5t\" (UniqueName: \"kubernetes.io/projected/04541d04-68f3-49c9-abb1-4feecceacbd6-kube-api-access-2dr5t\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.991059 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:14 crc kubenswrapper[4812]: I0218 16:33:14.991144 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04541d04-68f3-49c9-abb1-4feecceacbd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:15 crc kubenswrapper[4812]: I0218 16:33:15.760256 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerStarted","Data":"4ad13f0d84028f152cbf9b5f9119076f5f31eb24354cd3f1268a8a4eb783be14"} Feb 18 16:33:15 crc kubenswrapper[4812]: I0218 16:33:15.760318 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmn4k" Feb 18 16:33:15 crc kubenswrapper[4812]: I0218 16:33:15.784154 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4k6w9" podStartSLOduration=3.254881306 podStartE2EDuration="57.784123259s" podCreationTimestamp="2026-02-18 16:32:18 +0000 UTC" firstStartedPulling="2026-02-18 16:32:20.663769165 +0000 UTC m=+160.929380074" lastFinishedPulling="2026-02-18 16:33:15.193011108 +0000 UTC m=+215.458622027" observedRunningTime="2026-02-18 16:33:15.78160767 +0000 UTC m=+216.047218589" watchObservedRunningTime="2026-02-18 16:33:15.784123259 +0000 UTC m=+216.049734168" Feb 18 16:33:15 crc kubenswrapper[4812]: I0218 16:33:15.796320 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:33:15 crc kubenswrapper[4812]: I0218 16:33:15.799278 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmn4k"] Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.009219 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.009740 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" podUID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" containerName="controller-manager" containerID="cri-o://f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f" gracePeriod=30 Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.030506 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.030783 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" podUID="f78750cc-a36b-4fae-b038-7e1288c3ba06" containerName="route-controller-manager" containerID="cri-o://acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc" gracePeriod=30 Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.524838 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" path="/var/lib/kubelet/pods/04541d04-68f3-49c9-abb1-4feecceacbd6/volumes" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.548298 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.648129 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.713512 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dswc5\" (UniqueName: \"kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5\") pod \"f78750cc-a36b-4fae-b038-7e1288c3ba06\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.713579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config\") pod \"f78750cc-a36b-4fae-b038-7e1288c3ba06\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.713624 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert\") pod \"f78750cc-a36b-4fae-b038-7e1288c3ba06\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.713654 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca\") pod \"f78750cc-a36b-4fae-b038-7e1288c3ba06\" (UID: \"f78750cc-a36b-4fae-b038-7e1288c3ba06\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.714570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca" (OuterVolumeSpecName: "client-ca") pod "f78750cc-a36b-4fae-b038-7e1288c3ba06" (UID: "f78750cc-a36b-4fae-b038-7e1288c3ba06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.714756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config" (OuterVolumeSpecName: "config") pod "f78750cc-a36b-4fae-b038-7e1288c3ba06" (UID: "f78750cc-a36b-4fae-b038-7e1288c3ba06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.721926 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5" (OuterVolumeSpecName: "kube-api-access-dswc5") pod "f78750cc-a36b-4fae-b038-7e1288c3ba06" (UID: "f78750cc-a36b-4fae-b038-7e1288c3ba06"). InnerVolumeSpecName "kube-api-access-dswc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.724334 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f78750cc-a36b-4fae-b038-7e1288c3ba06" (UID: "f78750cc-a36b-4fae-b038-7e1288c3ba06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.768141 4812 generic.go:334] "Generic (PLEG): container finished" podID="f78750cc-a36b-4fae-b038-7e1288c3ba06" containerID="acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc" exitCode=0 Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.768197 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.768270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" event={"ID":"f78750cc-a36b-4fae-b038-7e1288c3ba06","Type":"ContainerDied","Data":"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc"} Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.769324 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2" event={"ID":"f78750cc-a36b-4fae-b038-7e1288c3ba06","Type":"ContainerDied","Data":"78a1a143da8b8e50fd8843768bb9c213bdd456306c6d2f65232b2c8b560212c4"} Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.769389 4812 scope.go:117] "RemoveContainer" containerID="acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.770891 4812 generic.go:334] "Generic (PLEG): container finished" podID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" containerID="f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f" exitCode=0 Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.770935 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" event={"ID":"52333fc3-471a-4ca6-ae6d-4db5d54941bf","Type":"ContainerDied","Data":"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f"} Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.770967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" event={"ID":"52333fc3-471a-4ca6-ae6d-4db5d54941bf","Type":"ContainerDied","Data":"96b69c6b314f93eb8be574ee9d6fb29fa6805f63fdf1cc441df694543718e5a4"} Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.771015 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-784f765dc7-wbx4t" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.790637 4812 scope.go:117] "RemoveContainer" containerID="acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc" Feb 18 16:33:16 crc kubenswrapper[4812]: E0218 16:33:16.791209 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc\": container with ID starting with acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc not found: ID does not exist" containerID="acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.791282 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc"} err="failed to get container status \"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc\": rpc error: code = NotFound desc = could not find container \"acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc\": container with ID starting with acb7efb648a3a9d953ccd9a49ec5707ac663431680ad8f062a92ea21a83505dc not found: ID does not exist" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.791320 4812 scope.go:117] "RemoveContainer" containerID="f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814167 4812 scope.go:117] "RemoveContainer" containerID="f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814661 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config\") pod \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814722 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert\") pod \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrnp\" (UniqueName: \"kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp\") pod \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca\") pod \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.814911 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles\") pod \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\" (UID: \"52333fc3-471a-4ca6-ae6d-4db5d54941bf\") " Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.815278 4812 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.815305 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f78750cc-a36b-4fae-b038-7e1288c3ba06-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.815318 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f78750cc-a36b-4fae-b038-7e1288c3ba06-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.815333 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dswc5\" (UniqueName: \"kubernetes.io/projected/f78750cc-a36b-4fae-b038-7e1288c3ba06-kube-api-access-dswc5\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.816344 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "52333fc3-471a-4ca6-ae6d-4db5d54941bf" (UID: "52333fc3-471a-4ca6-ae6d-4db5d54941bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.816672 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "52333fc3-471a-4ca6-ae6d-4db5d54941bf" (UID: "52333fc3-471a-4ca6-ae6d-4db5d54941bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: E0218 16:33:16.816835 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f\": container with ID starting with f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f not found: ID does not exist" containerID="f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.816881 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f"} err="failed to get container status \"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f\": rpc error: code = NotFound desc = could not find container \"f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f\": container with ID starting with f8eec48ca33e7741a3f2a3fb528c097d075ab1615b8b6cfcbf4498a2f6b1763f not found: ID does not exist" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.817707 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config" (OuterVolumeSpecName: "config") pod "52333fc3-471a-4ca6-ae6d-4db5d54941bf" (UID: "52333fc3-471a-4ca6-ae6d-4db5d54941bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.817991 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.821449 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "52333fc3-471a-4ca6-ae6d-4db5d54941bf" (UID: "52333fc3-471a-4ca6-ae6d-4db5d54941bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.822390 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp" (OuterVolumeSpecName: "kube-api-access-pwrnp") pod "52333fc3-471a-4ca6-ae6d-4db5d54941bf" (UID: "52333fc3-471a-4ca6-ae6d-4db5d54941bf"). InnerVolumeSpecName "kube-api-access-pwrnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.829490 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d9b4f67-4slp2"] Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.916805 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.916841 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.916855 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52333fc3-471a-4ca6-ae6d-4db5d54941bf-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.916864 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52333fc3-471a-4ca6-ae6d-4db5d54941bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:16 crc kubenswrapper[4812]: I0218 16:33:16.916874 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrnp\" (UniqueName: \"kubernetes.io/projected/52333fc3-471a-4ca6-ae6d-4db5d54941bf-kube-api-access-pwrnp\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.098229 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.101117 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-784f765dc7-wbx4t"] Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.906873 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907268 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907287 4812 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907304 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907313 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907327 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907336 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907348 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907356 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907367 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907375 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907386 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907397 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907411 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907419 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907432 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907441 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="extract-content" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907454 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" containerName="controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907465 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" containerName="controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907476 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78750cc-a36b-4fae-b038-7e1288c3ba06" containerName="route-controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 
16:33:17.907486 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78750cc-a36b-4fae-b038-7e1288c3ba06" containerName="route-controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: E0218 16:33:17.907503 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907511 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="extract-utilities" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907636 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" containerName="controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907653 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b91248b-ae50-4abe-8e1d-f7c6495e7d85" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907665 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04541d04-68f3-49c9-abb1-4feecceacbd6" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907680 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78750cc-a36b-4fae-b038-7e1288c3ba06" containerName="route-controller-manager" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.907691 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="55384caf-f9cf-4c69-978f-4f27c2a0aec0" containerName="registry-server" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.908278 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.911879 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.912206 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.912879 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.912915 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.913087 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.914317 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.915431 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.916544 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.916576 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.916941 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.916963 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.917044 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.918308 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.918781 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.920910 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.924520 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 16:33:17 crc kubenswrapper[4812]: I0218 16:33:17.928475 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.033999 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034091 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034206 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: 
\"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034286 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034333 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhht\" (UniqueName: \"kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.034384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6w7\" (UniqueName: \"kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " 
pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vhht\" (UniqueName: \"kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135659 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135708 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6w7\" (UniqueName: \"kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135809 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135857 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.135917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.137494 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.138005 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.138755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.138969 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.149255 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.149322 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.151041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.169546 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vhht\" (UniqueName: \"kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht\") pod \"route-controller-manager-86cf684d79-ddwg5\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.175493 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6w7\" (UniqueName: \"kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7\") pod \"controller-manager-855c99ddf-v87pt\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.235969 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.253801 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.465141 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:18 crc kubenswrapper[4812]: W0218 16:33:18.473491 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cbe2efb_bdff_440e_940f_5e7c8c0b9a12.slice/crio-2d66552cf3cb775c698bf96e4c82cf499ca1743bdcfa7720877bcf026e8678ee WatchSource:0}: Error finding container 2d66552cf3cb775c698bf96e4c82cf499ca1743bdcfa7720877bcf026e8678ee: Status 404 returned error can't find the container with id 2d66552cf3cb775c698bf96e4c82cf499ca1743bdcfa7720877bcf026e8678ee Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.517534 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52333fc3-471a-4ca6-ae6d-4db5d54941bf" path="/var/lib/kubelet/pods/52333fc3-471a-4ca6-ae6d-4db5d54941bf/volumes" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.518329 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78750cc-a36b-4fae-b038-7e1288c3ba06" path="/var/lib/kubelet/pods/f78750cc-a36b-4fae-b038-7e1288c3ba06/volumes" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.763486 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.788910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" event={"ID":"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12","Type":"ContainerStarted","Data":"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4"} Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.788988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" event={"ID":"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12","Type":"ContainerStarted","Data":"2d66552cf3cb775c698bf96e4c82cf499ca1743bdcfa7720877bcf026e8678ee"} Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.790306 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.792500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" event={"ID":"0aee388d-2922-4b01-be02-02c693050c54","Type":"ContainerStarted","Data":"f0bb6a48efaef133801e50880171386584da36abf50b28b42d7b31b20b76a1b3"} Feb 18 16:33:18 crc kubenswrapper[4812]: I0218 16:33:18.810209 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" podStartSLOduration=2.810185953 podStartE2EDuration="2.810185953s" podCreationTimestamp="2026-02-18 16:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:18.807960174 +0000 UTC m=+219.073571083" 
watchObservedRunningTime="2026-02-18 16:33:18.810185953 +0000 UTC m=+219.075796852" Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.134695 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.134969 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.141236 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.257116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.801040 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" event={"ID":"0aee388d-2922-4b01-be02-02c693050c54","Type":"ContainerStarted","Data":"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b"} Feb 18 16:33:19 crc kubenswrapper[4812]: I0218 16:33:19.819979 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" podStartSLOduration=3.819955872 podStartE2EDuration="3.819955872s" podCreationTimestamp="2026-02-18 16:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:19.817943059 +0000 UTC m=+220.083553978" watchObservedRunningTime="2026-02-18 16:33:19.819955872 +0000 UTC m=+220.085566771" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.809578 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.815862 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.855386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.876992 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.877055 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:33:20 crc kubenswrapper[4812]: I0218 16:33:20.934205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:33:21 crc kubenswrapper[4812]: I0218 16:33:21.865856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:33:22 crc kubenswrapper[4812]: I0218 16:33:22.570865 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:22 crc kubenswrapper[4812]: I0218 16:33:22.571532 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:22 
crc kubenswrapper[4812]: I0218 16:33:22.636976 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:22 crc kubenswrapper[4812]: I0218 16:33:22.883013 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:24 crc kubenswrapper[4812]: I0218 16:33:24.699430 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:33:24 crc kubenswrapper[4812]: I0218 16:33:24.839585 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mzvbp" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="registry-server" containerID="cri-o://83874e828b4a38563a197437639c69beda44f4712d61823cd2fd99e49e10d526" gracePeriod=2 Feb 18 16:33:25 crc kubenswrapper[4812]: I0218 16:33:25.854434 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e363386-369f-49c6-8412-c72c0c3a0433" containerID="83874e828b4a38563a197437639c69beda44f4712d61823cd2fd99e49e10d526" exitCode=0 Feb 18 16:33:25 crc kubenswrapper[4812]: I0218 16:33:25.854487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerDied","Data":"83874e828b4a38563a197437639c69beda44f4712d61823cd2fd99e49e10d526"} Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.155784 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.274381 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhdz\" (UniqueName: \"kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz\") pod \"3e363386-369f-49c6-8412-c72c0c3a0433\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.274532 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities\") pod \"3e363386-369f-49c6-8412-c72c0c3a0433\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.274652 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content\") pod \"3e363386-369f-49c6-8412-c72c0c3a0433\" (UID: \"3e363386-369f-49c6-8412-c72c0c3a0433\") " Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.277156 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities" (OuterVolumeSpecName: "utilities") pod "3e363386-369f-49c6-8412-c72c0c3a0433" (UID: "3e363386-369f-49c6-8412-c72c0c3a0433"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.289680 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz" (OuterVolumeSpecName: "kube-api-access-qqhdz") pod "3e363386-369f-49c6-8412-c72c0c3a0433" (UID: "3e363386-369f-49c6-8412-c72c0c3a0433"). 
InnerVolumeSpecName "kube-api-access-qqhdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.377091 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhdz\" (UniqueName: \"kubernetes.io/projected/3e363386-369f-49c6-8412-c72c0c3a0433-kube-api-access-qqhdz\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.377217 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.496273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e363386-369f-49c6-8412-c72c0c3a0433" (UID: "3e363386-369f-49c6-8412-c72c0c3a0433"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.581491 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e363386-369f-49c6-8412-c72c0c3a0433-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.868088 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzvbp" event={"ID":"3e363386-369f-49c6-8412-c72c0c3a0433","Type":"ContainerDied","Data":"b53047a0176796ee2c08bada30304be7d55a4f7e648ed08a8c6d98c8a4235759"} Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.868248 4812 scope.go:117] "RemoveContainer" containerID="83874e828b4a38563a197437639c69beda44f4712d61823cd2fd99e49e10d526" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.868290 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzvbp" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.902575 4812 scope.go:117] "RemoveContainer" containerID="358d0c5391c6e485294e05de2bb5a945009b0fd8e56ec9df94bfced4f5d6b621" Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.905769 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.920417 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mzvbp"] Feb 18 16:33:26 crc kubenswrapper[4812]: I0218 16:33:26.933678 4812 scope.go:117] "RemoveContainer" containerID="310140a006b940aaefb1d5c7de0d6920df99551bf2921d37c5c00ae5ae938ea5" Feb 18 16:33:28 crc kubenswrapper[4812]: I0218 16:33:28.515371 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" path="/var/lib/kubelet/pods/3e363386-369f-49c6-8412-c72c0c3a0433/volumes" Feb 18 16:33:30 crc kubenswrapper[4812]: I0218 16:33:30.959648 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pm7xx"] Feb 18 16:33:35 crc kubenswrapper[4812]: I0218 16:33:35.984825 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 16:33:35 crc kubenswrapper[4812]: I0218 16:33:35.985368 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" podUID="0aee388d-2922-4b01-be02-02c693050c54" containerName="controller-manager" containerID="cri-o://49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b" gracePeriod=30 Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.077466 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.077807 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" podUID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" containerName="route-controller-manager" containerID="cri-o://ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4" gracePeriod=30 Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.535234 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.598213 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.638767 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert\") pod \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.638886 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vhht\" (UniqueName: \"kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht\") pod \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.638935 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6w7\" (UniqueName: \"kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7\") pod \"0aee388d-2922-4b01-be02-02c693050c54\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.638969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config\") pod \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.639015 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca\") pod \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\" (UID: \"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.639048 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert\") pod \"0aee388d-2922-4b01-be02-02c693050c54\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.639083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles\") pod \"0aee388d-2922-4b01-be02-02c693050c54\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.639975 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca" (OuterVolumeSpecName: "client-ca") pod "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" (UID: "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.640088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config" (OuterVolumeSpecName: "config") pod "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" (UID: "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.640406 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0aee388d-2922-4b01-be02-02c693050c54" (UID: "0aee388d-2922-4b01-be02-02c693050c54"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.647240 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" (UID: "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.647324 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0aee388d-2922-4b01-be02-02c693050c54" (UID: "0aee388d-2922-4b01-be02-02c693050c54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.647389 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht" (OuterVolumeSpecName: "kube-api-access-4vhht") pod "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" (UID: "6cbe2efb-bdff-440e-940f-5e7c8c0b9a12"). InnerVolumeSpecName "kube-api-access-4vhht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.647413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7" (OuterVolumeSpecName: "kube-api-access-wn6w7") pod "0aee388d-2922-4b01-be02-02c693050c54" (UID: "0aee388d-2922-4b01-be02-02c693050c54"). InnerVolumeSpecName "kube-api-access-wn6w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740069 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config\") pod \"0aee388d-2922-4b01-be02-02c693050c54\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca\") pod \"0aee388d-2922-4b01-be02-02c693050c54\" (UID: \"0aee388d-2922-4b01-be02-02c693050c54\") " Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740302 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740315 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740328 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aee388d-2922-4b01-be02-02c693050c54-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740338 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740350 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740359 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vhht\" (UniqueName: \"kubernetes.io/projected/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12-kube-api-access-4vhht\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740369 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn6w7\" (UniqueName: \"kubernetes.io/projected/0aee388d-2922-4b01-be02-02c693050c54-kube-api-access-wn6w7\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740712 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca" (OuterVolumeSpecName: "client-ca") pod "0aee388d-2922-4b01-be02-02c693050c54" (UID: "0aee388d-2922-4b01-be02-02c693050c54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.740791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config" (OuterVolumeSpecName: "config") pod "0aee388d-2922-4b01-be02-02c693050c54" (UID: "0aee388d-2922-4b01-be02-02c693050c54"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.841558 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.841603 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aee388d-2922-4b01-be02-02c693050c54-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.953312 4812 generic.go:334] "Generic (PLEG): container finished" podID="0aee388d-2922-4b01-be02-02c693050c54" containerID="49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b" exitCode=0 Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.953395 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" event={"ID":"0aee388d-2922-4b01-be02-02c693050c54","Type":"ContainerDied","Data":"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b"} Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.953431 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" event={"ID":"0aee388d-2922-4b01-be02-02c693050c54","Type":"ContainerDied","Data":"f0bb6a48efaef133801e50880171386584da36abf50b28b42d7b31b20b76a1b3"} Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.953469 4812 scope.go:117] "RemoveContainer" containerID="49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.953633 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855c99ddf-v87pt" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.956517 4812 generic.go:334] "Generic (PLEG): container finished" podID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" containerID="ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4" exitCode=0 Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.956549 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" event={"ID":"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12","Type":"ContainerDied","Data":"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4"} Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.956572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" event={"ID":"6cbe2efb-bdff-440e-940f-5e7c8c0b9a12","Type":"ContainerDied","Data":"2d66552cf3cb775c698bf96e4c82cf499ca1743bdcfa7720877bcf026e8678ee"} Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.956616 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.980259 4812 scope.go:117] "RemoveContainer" containerID="49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b" Feb 18 16:33:36 crc kubenswrapper[4812]: E0218 16:33:36.982235 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b\": container with ID starting with 49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b not found: ID does not exist" containerID="49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.982345 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b"} err="failed to get container status \"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b\": rpc error: code = NotFound desc = could not find container \"49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b\": container with ID starting with 49ab5f53645294b06006e769acfd451e74c10bba343269a9c4bb969fc069b19b not found: ID does not exist" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.982405 4812 scope.go:117] "RemoveContainer" containerID="ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4" Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.995527 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:36 crc kubenswrapper[4812]: I0218 16:33:36.999735 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf684d79-ddwg5"] Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.007467 4812 scope.go:117] "RemoveContainer" containerID="ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4" Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.008392 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4\": container with ID starting with ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4 not found: ID does not exist" containerID="ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.008478 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4"} err="failed to get container status \"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4\": rpc error: code = NotFound desc = could not find container \"ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4\": container with ID starting with ad6ab491d0968477cd5a92f12e9e6506aac5815a021037cb3ba2d51b238344c4 not found: ID does not exist" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.011886 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.014613 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-855c99ddf-v87pt"] Feb 18 
16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925073 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp"] Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.925486 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="registry-server" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925510 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="registry-server" Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.925532 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" containerName="route-controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925545 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" containerName="route-controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.925560 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="extract-content" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925574 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="extract-content" Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.925601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aee388d-2922-4b01-be02-02c693050c54" containerName="controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925614 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aee388d-2922-4b01-be02-02c693050c54" containerName="controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: E0218 16:33:37.925638 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="extract-utilities" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925652 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="extract-utilities" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925823 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aee388d-2922-4b01-be02-02c693050c54" containerName="controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925846 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e363386-369f-49c6-8412-c72c0c3a0433" containerName="registry-server" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.925872 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" containerName="route-controller-manager" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.926575 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.930056 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.930360 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.931024 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.932444 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.932449 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.935270 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.936605 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b9d765846-whpw8"] Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.938285 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.943256 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.943434 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.944269 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.944571 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.946899 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.947341 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.953041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp"] Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.957210 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.960786 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9d765846-whpw8"] Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.963400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-config\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.963540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-client-ca\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.963824 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qkc\" (UniqueName: \"kubernetes.io/projected/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-kube-api-access-l6qkc\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.963958 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-serving-cert\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.964065 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdh2\" (UniqueName: \"kubernetes.io/projected/de3fc844-77f9-440f-924a-ceef65d743dd-kube-api-access-qkdh2\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.964135 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-proxy-ca-bundles\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.964644 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de3fc844-77f9-440f-924a-ceef65d743dd-serving-cert\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.964753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-config\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:37 crc kubenswrapper[4812]: I0218 16:33:37.964788 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-client-ca\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.066717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-config\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.066791 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-client-ca\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.066879 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-config\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.066933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-client-ca\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.066995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6qkc\" (UniqueName: \"kubernetes.io/projected/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-kube-api-access-l6qkc\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.067058 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-serving-cert\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.067170 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdh2\" (UniqueName: \"kubernetes.io/projected/de3fc844-77f9-440f-924a-ceef65d743dd-kube-api-access-qkdh2\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.067217 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-proxy-ca-bundles\") pod \"controller-manager-7b9d765846-whpw8\" 
(UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.067326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de3fc844-77f9-440f-924a-ceef65d743dd-serving-cert\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.069056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-config\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.069152 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-client-ca\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.069445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-config\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.069705 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de3fc844-77f9-440f-924a-ceef65d743dd-client-ca\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.071746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-proxy-ca-bundles\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.077343 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-serving-cert\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.078220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de3fc844-77f9-440f-924a-ceef65d743dd-serving-cert\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.092984 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkdh2\" (UniqueName: \"kubernetes.io/projected/de3fc844-77f9-440f-924a-ceef65d743dd-kube-api-access-qkdh2\") pod \"route-controller-manager-76c8884dd6-cldtp\" (UID: \"de3fc844-77f9-440f-924a-ceef65d743dd\") " pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.096845 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6qkc\" (UniqueName: \"kubernetes.io/projected/8714e1e8-ad75-4943-81d0-d63e6ff0fb19-kube-api-access-l6qkc\") pod \"controller-manager-7b9d765846-whpw8\" (UID: \"8714e1e8-ad75-4943-81d0-d63e6ff0fb19\") " pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.274338 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.286966 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.516177 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aee388d-2922-4b01-be02-02c693050c54" path="/var/lib/kubelet/pods/0aee388d-2922-4b01-be02-02c693050c54/volumes" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.516998 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbe2efb-bdff-440e-940f-5e7c8c0b9a12" path="/var/lib/kubelet/pods/6cbe2efb-bdff-440e-940f-5e7c8c0b9a12/volumes" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.575752 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9d765846-whpw8"] Feb 18 16:33:38 crc kubenswrapper[4812]: W0218 16:33:38.581767 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8714e1e8_ad75_4943_81d0_d63e6ff0fb19.slice/crio-b46ca2b29b1ad29a748ec10c8a6063e6d7ba8c78e150a7356645b20945346dab WatchSource:0}: Error finding container b46ca2b29b1ad29a748ec10c8a6063e6d7ba8c78e150a7356645b20945346dab: Status 404 returned error can't find the container with id b46ca2b29b1ad29a748ec10c8a6063e6d7ba8c78e150a7356645b20945346dab Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.730643 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp"] Feb 18 16:33:38 crc kubenswrapper[4812]: W0218 16:33:38.738893 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3fc844_77f9_440f_924a_ceef65d743dd.slice/crio-a216982cc738ebcc599f7af3bade409adc1e60644c632e426f8cff8472bb5958 WatchSource:0}: Error finding container a216982cc738ebcc599f7af3bade409adc1e60644c632e426f8cff8472bb5958: Status 404 returned error can't find the container with id a216982cc738ebcc599f7af3bade409adc1e60644c632e426f8cff8472bb5958 Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.977890 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" event={"ID":"8714e1e8-ad75-4943-81d0-d63e6ff0fb19","Type":"ContainerStarted","Data":"f141fc2aebf083223172bc74b484a6b9827686b3e789ff2ed6d3c0e39ca606c5"} Feb 18 16:33:38 crc 
kubenswrapper[4812]: I0218 16:33:38.977945 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" event={"ID":"8714e1e8-ad75-4943-81d0-d63e6ff0fb19","Type":"ContainerStarted","Data":"b46ca2b29b1ad29a748ec10c8a6063e6d7ba8c78e150a7356645b20945346dab"} Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.978159 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.980382 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" event={"ID":"de3fc844-77f9-440f-924a-ceef65d743dd","Type":"ContainerStarted","Data":"32ca853ff65541d9a9e0b411a436d2e6907741df423b5ae298f5378c164f4d09"} Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.980470 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" event={"ID":"de3fc844-77f9-440f-924a-ceef65d743dd","Type":"ContainerStarted","Data":"a216982cc738ebcc599f7af3bade409adc1e60644c632e426f8cff8472bb5958"} Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.980616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.982200 4812 patch_prober.go:28] interesting pod/route-controller-manager-76c8884dd6-cldtp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 18 16:33:38 crc kubenswrapper[4812]: I0218 16:33:38.982256 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" podUID="de3fc844-77f9-440f-924a-ceef65d743dd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 18 16:33:39 crc kubenswrapper[4812]: I0218 16:33:39.006450 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" podStartSLOduration=3.00643103 podStartE2EDuration="3.00643103s" podCreationTimestamp="2026-02-18 16:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:39.003108096 +0000 UTC m=+239.268719015" watchObservedRunningTime="2026-02-18 16:33:39.00643103 +0000 UTC m=+239.272041949" Feb 18 16:33:39 crc kubenswrapper[4812]: I0218 16:33:39.039125 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" podStartSLOduration=3.039085139 podStartE2EDuration="3.039085139s" podCreationTimestamp="2026-02-18 16:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:33:39.037246612 +0000 UTC m=+239.302857541" watchObservedRunningTime="2026-02-18 16:33:39.039085139 +0000 UTC m=+239.304696058" Feb 18 16:33:39 crc kubenswrapper[4812]: I0218 16:33:39.046868 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b9d765846-whpw8" Feb 18 16:33:39 crc kubenswrapper[4812]: I0218 16:33:39.992779 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76c8884dd6-cldtp" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.448371 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450235 4812 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450517 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450682 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816" gracePeriod=15 Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450802 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d" gracePeriod=15 Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450853 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a" gracePeriod=15 Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450955 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487" gracePeriod=15 Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.450965 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9" gracePeriod=15 Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453277 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453507 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453523 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453538 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453548 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453563 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453572 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453582 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453590 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453610 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453627 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453638 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.453649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453657 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453804 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453824 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453836 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453845 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453860 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.453948 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.454126 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.454137 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.454359 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.491974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492034 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492084 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492156 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.492217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc 
kubenswrapper[4812]: I0218 16:33:50.492241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.524867 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.531657 4812 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593726 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593730 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593770 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593837 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.593983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.594011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.594083 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.594081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.594089 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: I0218 16:33:50.833215 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:50 crc kubenswrapper[4812]: E0218 16:33:50.878839 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895646911cc10b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,LastTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.065212 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"03d155ac8ea2b6d8dbec444a238a3ca1d84b5b610b15e7fb32df4c71a20a4585"} Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.068952 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.071664 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.073013 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a" exitCode=0 Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.073044 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d" exitCode=0 Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.073055 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487" exitCode=0 Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.073064 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9" exitCode=2 Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.073134 4812 scope.go:117] 
"RemoveContainer" containerID="208344558b16fa146e5531b7961315e4cee27ee652619b71f75a9e47c9538576" Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.075690 4812 generic.go:334] "Generic (PLEG): container finished" podID="b6e4c967-d106-4af3-b4d2-813c0ea93021" containerID="1cc78efa063c71320e36ae119b0b2a507b29357e9ccad2c0bc9ec33f9bb728b1" exitCode=0 Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.075779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b6e4c967-d106-4af3-b4d2-813c0ea93021","Type":"ContainerDied","Data":"1cc78efa063c71320e36ae119b0b2a507b29357e9ccad2c0bc9ec33f9bb728b1"} Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.077171 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.101282 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.102335 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.102646 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.103159 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.104183 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:51 crc kubenswrapper[4812]: I0218 16:33:51.104291 4812 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.105007 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.306000 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.547425 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895646911cc10b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,LastTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 16:33:51 crc kubenswrapper[4812]: E0218 16:33:51.708624 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.099202 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.103762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b"} Feb 18 16:33:52 crc kubenswrapper[4812]: E0218 16:33:52.106213 4812 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.106219 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:52 crc kubenswrapper[4812]: E0218 16:33:52.510612 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.554356 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.559257 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.727331 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock\") pod \"b6e4c967-d106-4af3-b4d2-813c0ea93021\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.727392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access\") pod \"b6e4c967-d106-4af3-b4d2-813c0ea93021\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.727421 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir\") pod \"b6e4c967-d106-4af3-b4d2-813c0ea93021\" (UID: \"b6e4c967-d106-4af3-b4d2-813c0ea93021\") " Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.727513 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock" (OuterVolumeSpecName: "var-lock") pod "b6e4c967-d106-4af3-b4d2-813c0ea93021" (UID: "b6e4c967-d106-4af3-b4d2-813c0ea93021"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.727677 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b6e4c967-d106-4af3-b4d2-813c0ea93021" (UID: "b6e4c967-d106-4af3-b4d2-813c0ea93021"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.728133 4812 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.728167 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6e4c967-d106-4af3-b4d2-813c0ea93021-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.733373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b6e4c967-d106-4af3-b4d2-813c0ea93021" (UID: "b6e4c967-d106-4af3-b4d2-813c0ea93021"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.832172 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6e4c967-d106-4af3-b4d2-813c0ea93021-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.834994 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.836165 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.836907 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:52 crc kubenswrapper[4812]: I0218 16:33:52.837319 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.033869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.033943 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034154 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034238 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034254 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034768 4812 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034790 4812 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.034822 4812 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.114174 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.114984 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816" exitCode=0 Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.115069 4812 scope.go:117] "RemoveContainer" containerID="35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.115203 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.117004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b6e4c967-d106-4af3-b4d2-813c0ea93021","Type":"ContainerDied","Data":"af4477f0474f29fb5d157f3f0c68bf32441297afe6de02c582cb86adc04c5d7d"} Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.117057 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4477f0474f29fb5d157f3f0c68bf32441297afe6de02c582cb86adc04c5d7d" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.117067 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.118311 4812 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.139502 4812 scope.go:117] "RemoveContainer" containerID="c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.142434 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.142867 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.151772 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.152018 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.161386 4812 scope.go:117] "RemoveContainer" containerID="c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.181932 4812 scope.go:117] "RemoveContainer" containerID="af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.198414 4812 scope.go:117] "RemoveContainer" containerID="c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.217999 4812 scope.go:117] "RemoveContainer" containerID="3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.237791 4812 scope.go:117] "RemoveContainer" containerID="35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.238513 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\": container with ID starting with 35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a not found: ID does not exist" containerID="35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.238557 
4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a"} err="failed to get container status \"35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\": rpc error: code = NotFound desc = could not find container \"35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a\": container with ID starting with 35c22425b1dbed51830e102204fb27eaacd1bff5488936ccdd07049c813e419a not found: ID does not exist" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.238589 4812 scope.go:117] "RemoveContainer" containerID="c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.238954 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\": container with ID starting with c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d not found: ID does not exist" containerID="c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239040 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d"} err="failed to get container status \"c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\": rpc error: code = NotFound desc = could not find container \"c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d\": container with ID starting with c4bd1545c2ed4bcb87186ee708902e06b02b5536d3b69c736906dda2f23cee6d not found: ID does not exist" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239118 4812 scope.go:117] "RemoveContainer" containerID="c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.239484 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\": container with ID starting with c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487 not found: ID does not exist" containerID="c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239524 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487"} err="failed to get container status \"c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\": rpc error: code = NotFound desc = could not find container \"c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487\": container with ID starting with c24f862eaa4fadde34e37cfed0e2f3fa1ef11b5c7a977ff1f2861f742f206487 not found: ID does not exist" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239545 4812 scope.go:117] "RemoveContainer" containerID="af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.239892 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\": container with ID starting with af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9 not found: ID 
does not exist" containerID="af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239953 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9"} err="failed to get container status \"af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\": rpc error: code = NotFound desc = could not find container \"af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9\": container with ID starting with af4fd0906d6ae1637192f20b580a11e93df647facd85f4428fb56a4ba857efa9 not found: ID does not exist" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.239973 4812 scope.go:117] "RemoveContainer" containerID="c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.240271 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\": container with ID starting with c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816 not found: ID does not exist" containerID="c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.240306 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816"} err="failed to get container status \"c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\": rpc error: code = NotFound desc = could not find container \"c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816\": container with ID starting with c368c1e1c9f1f4f6e9355a737ce4fa725a99e4c4cb29701a1798975dc1957816 not found: ID does not exist" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.240328 4812 scope.go:117] "RemoveContainer" containerID="3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7" Feb 18 16:33:53 crc kubenswrapper[4812]: E0218 16:33:53.240650 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\": container with ID starting with 3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7 not found: ID does not exist" containerID="3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7" Feb 18 16:33:53 crc kubenswrapper[4812]: I0218 16:33:53.240684 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7"} err="failed to get container status \"3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\": rpc error: code = NotFound desc = could not find container \"3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7\": container with ID starting with 3dbff3aca5ce339254a4bb94628c3ea709e2b62a8edf3538abdb1912f5a60da7 not found: ID does not exist" Feb 18 16:33:54 crc kubenswrapper[4812]: E0218 16:33:54.112571 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Feb 18 16:33:54 crc kubenswrapper[4812]: I0218 16:33:54.522457 
4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 16:33:54 crc kubenswrapper[4812]: E0218 16:33:54.563199 4812 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" volumeName="registry-storage" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.004751 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerName="oauth-openshift" containerID="cri-o://0d1fc7dc411036a7d1f3d528947f2011f74de8d9cc6f332557830220c4299d75" gracePeriod=15 Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.157738 4812 generic.go:334] "Generic (PLEG): container finished" podID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerID="0d1fc7dc411036a7d1f3d528947f2011f74de8d9cc6f332557830220c4299d75" exitCode=0 Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.157804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" event={"ID":"677e33bb-1571-4051-bbe6-64dfc16f4520","Type":"ContainerDied","Data":"0d1fc7dc411036a7d1f3d528947f2011f74de8d9cc6f332557830220c4299d75"} Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.575714 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.576297 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.576740 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.694541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.694632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.694676 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvq67\" (UniqueName: \"kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.694727 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.694802 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696008 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696435 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data\") pod 
\"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696425 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696490 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696531 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696568 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696662 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696653 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.696852 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.697509 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.697598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.697665 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login\") pod \"677e33bb-1571-4051-bbe6-64dfc16f4520\" (UID: \"677e33bb-1571-4051-bbe6-64dfc16f4520\") " Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.698344 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.698346 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.698385 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.698410 4812 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.698371 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.702325 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67" (OuterVolumeSpecName: "kube-api-access-hvq67") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "kube-api-access-hvq67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.702783 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.703595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.704734 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.709057 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.709559 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.709729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.709996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.710324 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "677e33bb-1571-4051-bbe6-64dfc16f4520" (UID: "677e33bb-1571-4051-bbe6-64dfc16f4520"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800038 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800090 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800129 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/677e33bb-1571-4051-bbe6-64dfc16f4520-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800147 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800170 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800188 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvq67\" (UniqueName: \"kubernetes.io/projected/677e33bb-1571-4051-bbe6-64dfc16f4520-kube-api-access-hvq67\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800207 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800223 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800237 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800253 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:56 crc kubenswrapper[4812]: I0218 16:33:56.800269 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/677e33bb-1571-4051-bbe6-64dfc16f4520-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.170482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" 
event={"ID":"677e33bb-1571-4051-bbe6-64dfc16f4520","Type":"ContainerDied","Data":"f396373b4c81d2c2738ffd611b51f3186ab2dc450a2f8315f7abe78d12832094"} Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.170581 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.170601 4812 scope.go:117] "RemoveContainer" containerID="0d1fc7dc411036a7d1f3d528947f2011f74de8d9cc6f332557830220c4299d75" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.171574 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.172218 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.204231 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:57 crc kubenswrapper[4812]: I0218 16:33:57.204885 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:33:57 crc kubenswrapper[4812]: E0218 16:33:57.314476 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="6.4s" Feb 18 16:34:00 crc kubenswrapper[4812]: I0218 16:34:00.515614 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:00 crc kubenswrapper[4812]: I0218 16:34:00.516718 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.549383 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895646911cc10b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,LastTimestamp:2026-02-18 16:33:50.873088177 +0000 UTC m=+251.138699126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.580284 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:34:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:34:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:34:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T16:34:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.580808 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.581581 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.582031 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.582325 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:01 crc kubenswrapper[4812]: E0218 16:34:01.582345 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 
16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.508400 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.511187 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.512002 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.527749 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.528150 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:02 crc kubenswrapper[4812]: E0218 16:34:02.528528 4812 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:02 crc kubenswrapper[4812]: I0218 16:34:02.529133 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:02 crc kubenswrapper[4812]: W0218 16:34:02.566599 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-516bcf43203232440fb9cd0127bc5835fa593f91a3f8f9252e30d1013ba3434f WatchSource:0}: Error finding container 516bcf43203232440fb9cd0127bc5835fa593f91a3f8f9252e30d1013ba3434f: Status 404 returned error can't find the container with id 516bcf43203232440fb9cd0127bc5835fa593f91a3f8f9252e30d1013ba3434f Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.219986 4812 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="738a3735760a29088354ab0a3d463390907432cca474eceab90c98514ac86667" exitCode=0 Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.220197 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"738a3735760a29088354ab0a3d463390907432cca474eceab90c98514ac86667"} Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.220377 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"516bcf43203232440fb9cd0127bc5835fa593f91a3f8f9252e30d1013ba3434f"} Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.220806 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.220831 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:03 crc kubenswrapper[4812]: E0218 16:34:03.221455 4812 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.221457 4812 status_manager.go:851] "Failed to get status for pod" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:03 crc kubenswrapper[4812]: I0218 16:34:03.222057 4812 status_manager.go:851] "Failed to get status for pod" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" pod="openshift-authentication/oauth-openshift-558db77b4-pm7xx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pm7xx\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.232321 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"acb4dd155e947b4b2514e9ebefe35faee65903335b155f3e13d42c456d192f06"} Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.232652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18e79ca08a805f4e2e83e3b2e88b7d17e906c3699e66e030aef4f1532d38c1cb"} Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.232670 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8165d451c0cb968a95c5329cd9418c0ad4d9607930994155849eec66b6c87b06"} Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.235892 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.235940 4812 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f" exitCode=1 Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.235981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f"} Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.236568 4812 scope.go:117] "RemoveContainer" containerID="66edbde17fd3620aab97d0067e133c0847bdbc65e153a713532f56b01f13e24f" Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.524144 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:04 crc kubenswrapper[4812]: I0218 16:34:04.627737 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.256155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"988cfd94c85dadd7f43efd96624a3978143bfd250d6438dfb4c465296eeecafa"} Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.256277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"20b5997911f254b064808f263f43d1a027cbb384ecef52349a04a5329826d8c4"} Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.256539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.256659 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.256678 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.260015 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 16:34:05 crc kubenswrapper[4812]: I0218 16:34:05.260080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c92f214369807a58787e3a9ea93b65ff3a124fb72b330b65b264d2dd1284f30f"} Feb 18 16:34:07 crc kubenswrapper[4812]: I0218 16:34:07.530561 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:07 crc kubenswrapper[4812]: I0218 16:34:07.531003 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:07 crc kubenswrapper[4812]: I0218 16:34:07.539535 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:10 crc kubenswrapper[4812]: I0218 16:34:10.269538 4812 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:10 crc kubenswrapper[4812]: I0218 16:34:10.302512 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:10 crc kubenswrapper[4812]: I0218 16:34:10.302561 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:10 crc kubenswrapper[4812]: I0218 16:34:10.307794 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:10 crc kubenswrapper[4812]: I0218 16:34:10.529533 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="85051817-3dd1-4246-9c1e-3639eb6e557e" Feb 18 16:34:11 crc kubenswrapper[4812]: I0218 16:34:11.309917 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:11 crc kubenswrapper[4812]: I0218 16:34:11.310361 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:11 crc kubenswrapper[4812]: I0218 16:34:11.315903 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="85051817-3dd1-4246-9c1e-3639eb6e557e" Feb 18 16:34:14 crc kubenswrapper[4812]: I0218 16:34:14.523690 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:14 crc kubenswrapper[4812]: I0218 16:34:14.627440 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:14 crc kubenswrapper[4812]: I0218 16:34:14.628703 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 16:34:14 crc kubenswrapper[4812]: I0218 16:34:14.628895 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" 
output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 16:34:20 crc kubenswrapper[4812]: I0218 16:34:20.808503 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 16:34:21 crc kubenswrapper[4812]: I0218 16:34:21.466774 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 16:34:21 crc kubenswrapper[4812]: I0218 16:34:21.613091 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 16:34:21 crc kubenswrapper[4812]: I0218 16:34:21.790668 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 16:34:21 crc kubenswrapper[4812]: I0218 16:34:21.794624 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 16:34:21 crc kubenswrapper[4812]: I0218 16:34:21.980673 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 16:34:22 crc kubenswrapper[4812]: I0218 16:34:22.165217 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 16:34:22 crc kubenswrapper[4812]: I0218 16:34:22.867941 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.050438 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.075321 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.123647 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.209555 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.311537 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.581616 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.646182 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.653853 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.733912 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 16:34:23 crc kubenswrapper[4812]: I0218 16:34:23.824517 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 16:34:24 crc kubenswrapper[4812]: 
I0218 16:34:24.041454 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.093468 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.145280 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.152728 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.171083 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.407916 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.435847 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.611070 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.634946 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.644297 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.685019 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.724419 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.794868 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.894403 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.900231 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 16:34:24 crc kubenswrapper[4812]: I0218 16:34:24.990726 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.012296 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.031317 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.158737 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.299689 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.311850 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.350071 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.514475 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.564794 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.573886 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.573919 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.601312 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.643920 4812 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.726608 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.732301 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.743223 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.893028 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.939199 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 16:34:25 crc kubenswrapper[4812]: I0218 16:34:25.992420 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.000677 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.027445 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.064631 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.114390 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.141321 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.221076 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.256937 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.257770 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.259748 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.296068 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.321371 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.386841 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.446456 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.463215 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.473222 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.486866 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.608226 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.647685 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.761916 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.768265 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.796236 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.864455 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 
18 16:34:26 crc kubenswrapper[4812]: I0218 16:34:26.870956 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.064684 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.066027 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.121368 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.281729 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.313523 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.393906 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.397750 4812 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.413243 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.482789 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.524695 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.625184 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.641010 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.642712 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.719913 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.726791 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.790723 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.825637 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.920158 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.939748 4812 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 16:34:27 crc kubenswrapper[4812]: I0218 16:34:27.947654 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.030330 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.052800 4812 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.106752 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.134748 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.222150 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.342628 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.367170 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.444488 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.479078 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.486644 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.534602 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.590141 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.649753 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.715536 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.743731 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.748828 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.774871 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 16:34:28 crc 
kubenswrapper[4812]: I0218 16:34:28.790773 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.874495 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.874949 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.898789 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 16:34:28 crc kubenswrapper[4812]: I0218 16:34:28.981767 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.004466 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.030409 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.043538 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.084324 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.106703 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.127991 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.177413 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.216523 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.223518 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.234254 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.285326 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.326068 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.373515 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.385989 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 16:34:29 
crc kubenswrapper[4812]: I0218 16:34:29.398408 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.576018 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.624618 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.629617 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.710987 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.713739 4812 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.724288 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pm7xx","openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.724393 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6bcf78946b-gvx8x"] Feb 18 16:34:29 crc kubenswrapper[4812]: E0218 16:34:29.724751 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerName="oauth-openshift" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.724786 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerName="oauth-openshift" Feb 18 16:34:29 crc kubenswrapper[4812]: E0218 16:34:29.724813 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" containerName="installer" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.724828 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" containerName="installer" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.725029 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e4c967-d106-4af3-b4d2-813c0ea93021" containerName="installer" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.725057 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" containerName="oauth-openshift" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.725832 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.725851 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.726657 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56631bd7-1e79-4a24-ab57-7774b75f8faa" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.733362 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.734073 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.734541 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.735393 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.735660 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.739895 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.740225 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.740791 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.740829 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.740844 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.741572 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.743457 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.745461 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.756221 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.759757 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.764531 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 
16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792403 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792446 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792357 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.792332953 podStartE2EDuration="19.792332953s" podCreationTimestamp="2026-02-18 16:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:34:29.791082434 +0000 UTC m=+290.056693353" watchObservedRunningTime="2026-02-18 16:34:29.792332953 +0000 UTC m=+290.057943902" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-policies\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792642 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792723 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792764 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-error\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792793 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-dir\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.792842 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2pl\" (UniqueName: \"kubernetes.io/projected/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-kube-api-access-lb2pl\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.797337 4812 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.803977 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.804089 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.864525 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894564 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2pl\" (UniqueName: \"kubernetes.io/projected/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-kube-api-access-lb2pl\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894661 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894821 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894861 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-policies\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.894972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895020 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895130 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895169 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-error\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-dir\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895303 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-dir\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.895746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.896001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.896590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-audit-policies\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.897731 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.902190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.902193 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.902709 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.903708 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.904067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.904535 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.905308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.906006 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.916386 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2pl\" (UniqueName: \"kubernetes.io/projected/d98e96bd-5a20-41d4-97e9-5bad84ba6b5b-kube-api-access-lb2pl\") pod \"oauth-openshift-6bcf78946b-gvx8x\" (UID: \"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.986244 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 16:34:29 crc kubenswrapper[4812]: I0218 16:34:29.986258 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.033177 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.054657 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.061698 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.151842 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.183278 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.206328 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.211164 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.271401 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.274900 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.479213 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.519841 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.526852 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677e33bb-1571-4051-bbe6-64dfc16f4520" path="/var/lib/kubelet/pods/677e33bb-1571-4051-bbe6-64dfc16f4520/volumes" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.575590 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.580041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bcf78946b-gvx8x"] Feb 18 16:34:30 crc kubenswrapper[4812]: W0218 16:34:30.585296 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd98e96bd_5a20_41d4_97e9_5bad84ba6b5b.slice/crio-c46c6208670728e07fe7e2b2915dab13a3613b3c7b580c9015353e4d128df4ec WatchSource:0}: Error finding container c46c6208670728e07fe7e2b2915dab13a3613b3c7b580c9015353e4d128df4ec: Status 404 returned error can't find the container with id c46c6208670728e07fe7e2b2915dab13a3613b3c7b580c9015353e4d128df4ec Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.668119 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.670148 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.710064 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.753370 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 
16:34:30.805968 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 16:34:30 crc kubenswrapper[4812]: I0218 16:34:30.925182 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.057815 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.132809 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.135472 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.172342 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.190212 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.234969 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.332094 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.464950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" event={"ID":"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b","Type":"ContainerStarted","Data":"68a8dc57ddc6a1b521dabd34b7cba6830472cc72c7e0a66718c896c65c770f1d"} Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.465013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" event={"ID":"d98e96bd-5a20-41d4-97e9-5bad84ba6b5b","Type":"ContainerStarted","Data":"c46c6208670728e07fe7e2b2915dab13a3613b3c7b580c9015353e4d128df4ec"} Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.465468 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.470291 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.505158 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bcf78946b-gvx8x" podStartSLOduration=61.505137159 podStartE2EDuration="1m1.505137159s" podCreationTimestamp="2026-02-18 16:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:34:31.486401594 +0000 UTC m=+291.752012503" watchObservedRunningTime="2026-02-18 16:34:31.505137159 +0000 UTC m=+291.770748068" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.574696 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 
16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.877656 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.916607 4812 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.945131 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 16:34:31 crc kubenswrapper[4812]: I0218 16:34:31.964584 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.018018 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.022592 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.104739 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.227912 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.281762 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.349051 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.416382 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.425648 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.451917 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.526048 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.595090 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.635009 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.760486 4812 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.760939 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b" gracePeriod=5 Feb 
18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.786577 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 16:34:32 crc kubenswrapper[4812]: I0218 16:34:32.835319 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.001336 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.165260 4812 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.171585 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.314907 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.398069 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.695901 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.708144 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.726281 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.798306 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.867635 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 16:34:33 crc kubenswrapper[4812]: I0218 16:34:33.915060 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:33.999943 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.108621 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.286469 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.333458 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.350640 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.472441 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.530463 4812 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.546998 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.648844 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.792350 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 16:34:34 crc kubenswrapper[4812]: I0218 16:34:34.885159 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.027623 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.035333 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.060532 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.091815 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.366345 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.387803 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.555610 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.568622 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.569902 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.696747 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.725682 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.734249 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.835862 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.905240 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 16:34:35 crc kubenswrapper[4812]: I0218 16:34:35.976417 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.079882 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.080343 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.122812 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.127260 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.180193 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.561238 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 16:34:36 crc kubenswrapper[4812]: I0218 16:34:36.983202 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.050613 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.457001 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.603796 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.752682 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.904591 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 16:34:37 crc kubenswrapper[4812]: I0218 16:34:37.904693 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046323 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046415 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046475 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046700 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.046821 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.047049 4812 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.047065 4812 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.047074 4812 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.047082 4812 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.058002 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.148266 4812 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.178056 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.192824 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.208424 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.514936 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.523465 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.523514 4812 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b" exitCode=137 Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.523559 4812 scope.go:117] "RemoveContainer" containerID="03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.523585 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.540550 4812 scope.go:117] "RemoveContainer" containerID="03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b" Feb 18 16:34:38 crc kubenswrapper[4812]: E0218 16:34:38.541464 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b\": container with ID starting with 03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b not found: ID does not exist" containerID="03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.541554 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b"} err="failed to get container status \"03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b\": rpc error: code = NotFound desc = could not find container \"03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b\": container with ID starting with 03d6ca531bf5ef81e9e2540c791234728e893631c325e79398b73dbefb2e8d5b not found: ID does not exist" Feb 18 16:34:38 crc kubenswrapper[4812]: I0218 16:34:38.673559 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 16:34:39 crc kubenswrapper[4812]: I0218 16:34:39.050750 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 16:34:40 crc kubenswrapper[4812]: I0218 16:34:40.261527 4812 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 16:35:03 crc kubenswrapper[4812]: I0218 16:35:03.413740 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:35:03 crc kubenswrapper[4812]: I0218 16:35:03.414430 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:35:33 crc kubenswrapper[4812]: I0218 16:35:33.413520 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:35:33 crc kubenswrapper[4812]: I0218 16:35:33.414563 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.347936 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-h9h7q"] Feb 18 16:35:51 crc kubenswrapper[4812]: E0218 16:35:51.348995 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.349017 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.349175 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.349690 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.369156 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h9h7q"] Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4a944488-aad6-41b5-bf48-c6e4c3848ea9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484114 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484162 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckl6t\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-kube-api-access-ckl6t\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-tls\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484222 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a944488-aad6-41b5-bf48-c6e4c3848ea9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484264 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-bound-sa-token\") pod 
\"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-certificates\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.484307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-trusted-ca\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.511062 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586013 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckl6t\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-kube-api-access-ckl6t\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-tls\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a944488-aad6-41b5-bf48-c6e4c3848ea9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-bound-sa-token\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-certificates\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 
18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.586947 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-trusted-ca\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.587028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4a944488-aad6-41b5-bf48-c6e4c3848ea9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.588327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4a944488-aad6-41b5-bf48-c6e4c3848ea9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.588803 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-certificates\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.588912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a944488-aad6-41b5-bf48-c6e4c3848ea9-trusted-ca\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.594455 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a944488-aad6-41b5-bf48-c6e4c3848ea9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.594457 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-registry-tls\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.606225 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-bound-sa-token\") pod \"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.610345 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckl6t\" (UniqueName: \"kubernetes.io/projected/4a944488-aad6-41b5-bf48-c6e4c3848ea9-kube-api-access-ckl6t\") pod 
\"image-registry-66df7c8f76-h9h7q\" (UID: \"4a944488-aad6-41b5-bf48-c6e4c3848ea9\") " pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.667951 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:51 crc kubenswrapper[4812]: I0218 16:35:51.915070 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h9h7q"] Feb 18 16:35:52 crc kubenswrapper[4812]: I0218 16:35:52.020805 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" event={"ID":"4a944488-aad6-41b5-bf48-c6e4c3848ea9","Type":"ContainerStarted","Data":"26ad12e1acce01e6936a5f4e031075c5cb2c0d941e36f0ff39ada6f4430faeaf"} Feb 18 16:35:53 crc kubenswrapper[4812]: I0218 16:35:53.030651 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" event={"ID":"4a944488-aad6-41b5-bf48-c6e4c3848ea9","Type":"ContainerStarted","Data":"9454d3e3461f5994f7fde250ced0bb12203a794cee8b23d81575acf4ae1a5c75"} Feb 18 16:35:53 crc kubenswrapper[4812]: I0218 16:35:53.030875 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:35:53 crc kubenswrapper[4812]: I0218 16:35:53.057352 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" podStartSLOduration=2.057275218 podStartE2EDuration="2.057275218s" podCreationTimestamp="2026-02-18 16:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:35:53.051298835 +0000 UTC m=+373.316909764" watchObservedRunningTime="2026-02-18 16:35:53.057275218 +0000 UTC m=+373.322886157" Feb 18 16:36:03 crc kubenswrapper[4812]: I0218 16:36:03.413967 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:36:03 crc kubenswrapper[4812]: I0218 16:36:03.414458 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:36:03 crc kubenswrapper[4812]: I0218 16:36:03.414533 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:36:03 crc kubenswrapper[4812]: I0218 16:36:03.415451 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:36:03 crc kubenswrapper[4812]: I0218 16:36:03.415521 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e" gracePeriod=600 Feb 18 16:36:04 crc kubenswrapper[4812]: I0218 16:36:04.109329 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e" exitCode=0 Feb 18 16:36:04 crc kubenswrapper[4812]: I0218 16:36:04.109441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e"} Feb 18 16:36:04 crc kubenswrapper[4812]: I0218 16:36:04.109824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb"} Feb 18 16:36:04 crc kubenswrapper[4812]: I0218 16:36:04.109864 4812 scope.go:117] "RemoveContainer" containerID="58cb514e3880a7c9c1870a6fc6bf78ac9809ea97a038a740a0f5ffb39cd37ef3" Feb 18 16:36:11 crc kubenswrapper[4812]: I0218 16:36:11.675774 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-h9h7q" Feb 18 16:36:11 crc kubenswrapper[4812]: I0218 16:36:11.765855 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.941451 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.942816 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86cmv" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="registry-server" containerID="cri-o://ffa1ed499759eac86396c765ade08dd25c97be9a169f7a4538803369792bcec4" gracePeriod=30 Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.953234 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.954020 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4k6w9" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="registry-server" containerID="cri-o://4ad13f0d84028f152cbf9b5f9119076f5f31eb24354cd3f1268a8a4eb783be14" gracePeriod=30 Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.972534 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.972894 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" containerID="cri-o://1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7" gracePeriod=30 Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.984607 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.985241 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-st44b" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="registry-server" containerID="cri-o://c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8" gracePeriod=30 Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.997791 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:36:21 crc kubenswrapper[4812]: I0218 16:36:21.998235 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lvsc5" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" containerID="cri-o://3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" gracePeriod=30 Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.032447 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p5ppf"] Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.033863 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.041052 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p5ppf"] Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.218444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9z8\" (UniqueName: \"kubernetes.io/projected/083e70e9-e72b-4e1b-a398-ebe2c7610368-kube-api-access-hf9z8\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.218574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.218653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: E0218 16:36:22.230006 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b is running failed: container process not found" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 16:36:22 crc kubenswrapper[4812]: E0218 16:36:22.230636 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking 
if PID of 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b is running failed: container process not found" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 16:36:22 crc kubenswrapper[4812]: E0218 16:36:22.231078 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b is running failed: container process not found" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 16:36:22 crc kubenswrapper[4812]: E0218 16:36:22.231146 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-lvsc5" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.262411 4812 generic.go:334] "Generic (PLEG): container finished" podID="3913399c-b196-44e0-a381-0526a310bb4b" containerID="4ad13f0d84028f152cbf9b5f9119076f5f31eb24354cd3f1268a8a4eb783be14" exitCode=0 Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.262508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerDied","Data":"4ad13f0d84028f152cbf9b5f9119076f5f31eb24354cd3f1268a8a4eb783be14"} Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.270151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerDied","Data":"ffa1ed499759eac86396c765ade08dd25c97be9a169f7a4538803369792bcec4"} Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.270076 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd50996-0863-4c12-87b4-3e771a829d07" containerID="ffa1ed499759eac86396c765ade08dd25c97be9a169f7a4538803369792bcec4" exitCode=0 Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.319978 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9z8\" (UniqueName: \"kubernetes.io/projected/083e70e9-e72b-4e1b-a398-ebe2c7610368-kube-api-access-hf9z8\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.320039 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.320132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.323780 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.331792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/083e70e9-e72b-4e1b-a398-ebe2c7610368-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.339507 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9z8\" (UniqueName: \"kubernetes.io/projected/083e70e9-e72b-4e1b-a398-ebe2c7610368-kube-api-access-hf9z8\") pod \"marketplace-operator-79b997595-p5ppf\" (UID: \"083e70e9-e72b-4e1b-a398-ebe2c7610368\") " pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.618748 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.850793 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.933783 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.970433 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.977744 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:36:22 crc kubenswrapper[4812]: I0218 16:36:22.985797 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.023088 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p5ppf"] Feb 18 16:36:23 crc kubenswrapper[4812]: W0218 16:36:23.024478 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083e70e9_e72b_4e1b_a398_ebe2c7610368.slice/crio-e75037fe3c1957ca2b721c0a5a1619d66c6ed959c5dfced080a7fc37a7a8270d WatchSource:0}: Error finding container e75037fe3c1957ca2b721c0a5a1619d66c6ed959c5dfced080a7fc37a7a8270d: Status 404 returned error can't find the container with id e75037fe3c1957ca2b721c0a5a1619d66c6ed959c5dfced080a7fc37a7a8270d Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content\") pod \"18e4e3fe-6d0e-4509-8275-ba450daa2602\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities\") pod \"18e4e3fe-6d0e-4509-8275-ba450daa2602\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h2kd\" (UniqueName: \"kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd\") pod \"18e4e3fe-6d0e-4509-8275-ba450daa2602\" (UID: \"18e4e3fe-6d0e-4509-8275-ba450daa2602\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032840 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dk4b\" (UniqueName: \"kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b\") pod \"3913399c-b196-44e0-a381-0526a310bb4b\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032957 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content\") pod \"3913399c-b196-44e0-a381-0526a310bb4b\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.032977 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities\") pod \"3913399c-b196-44e0-a381-0526a310bb4b\" (UID: \"3913399c-b196-44e0-a381-0526a310bb4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.033961 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities" (OuterVolumeSpecName: "utilities") pod "3913399c-b196-44e0-a381-0526a310bb4b" (UID: "3913399c-b196-44e0-a381-0526a310bb4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.038285 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities" (OuterVolumeSpecName: "utilities") pod "18e4e3fe-6d0e-4509-8275-ba450daa2602" (UID: "18e4e3fe-6d0e-4509-8275-ba450daa2602"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.041840 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd" (OuterVolumeSpecName: "kube-api-access-2h2kd") pod "18e4e3fe-6d0e-4509-8275-ba450daa2602" (UID: "18e4e3fe-6d0e-4509-8275-ba450daa2602"). InnerVolumeSpecName "kube-api-access-2h2kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.042109 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b" (OuterVolumeSpecName: "kube-api-access-4dk4b") pod "3913399c-b196-44e0-a381-0526a310bb4b" (UID: "3913399c-b196-44e0-a381-0526a310bb4b"). InnerVolumeSpecName "kube-api-access-4dk4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.066648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18e4e3fe-6d0e-4509-8275-ba450daa2602" (UID: "18e4e3fe-6d0e-4509-8275-ba450daa2602"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.102940 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3913399c-b196-44e0-a381-0526a310bb4b" (UID: "3913399c-b196-44e0-a381-0526a310bb4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.134362 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics\") pod \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.134681 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2vc\" (UniqueName: \"kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc\") pod \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.134844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content\") pod \"6bd50996-0863-4c12-87b4-3e771a829d07\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.134944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities\") pod \"6bd50996-0863-4c12-87b4-3e771a829d07\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135063 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities\") pod \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135205 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content\") pod \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135314 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmskk\" (UniqueName: \"kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk\") pod \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\" (UID: \"a8351117-bbbe-446f-a319-2bd48f5f6f4b\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135451 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmlq\" (UniqueName: \"kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq\") pod \"6bd50996-0863-4c12-87b4-3e771a829d07\" (UID: \"6bd50996-0863-4c12-87b4-3e771a829d07\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135582 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca\") pod \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\" (UID: \"b00e87af-1e21-4c4b-ae20-9da5de7e8176\") " Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.135857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities" (OuterVolumeSpecName: 
"utilities") pod "a8351117-bbbe-446f-a319-2bd48f5f6f4b" (UID: "a8351117-bbbe-446f-a319-2bd48f5f6f4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136121 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136193 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3913399c-b196-44e0-a381-0526a310bb4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136265 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136327 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136407 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e4e3fe-6d0e-4509-8275-ba450daa2602-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136474 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h2kd\" (UniqueName: \"kubernetes.io/projected/18e4e3fe-6d0e-4509-8275-ba450daa2602-kube-api-access-2h2kd\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136541 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dk4b\" (UniqueName: \"kubernetes.io/projected/3913399c-b196-44e0-a381-0526a310bb4b-kube-api-access-4dk4b\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136284 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities" (OuterVolumeSpecName: "utilities") pod "6bd50996-0863-4c12-87b4-3e771a829d07" (UID: "6bd50996-0863-4c12-87b4-3e771a829d07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.136828 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b00e87af-1e21-4c4b-ae20-9da5de7e8176" (UID: "b00e87af-1e21-4c4b-ae20-9da5de7e8176"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.139322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq" (OuterVolumeSpecName: "kube-api-access-cdmlq") pod "6bd50996-0863-4c12-87b4-3e771a829d07" (UID: "6bd50996-0863-4c12-87b4-3e771a829d07"). InnerVolumeSpecName "kube-api-access-cdmlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.140033 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk" (OuterVolumeSpecName: "kube-api-access-tmskk") pod "a8351117-bbbe-446f-a319-2bd48f5f6f4b" (UID: "a8351117-bbbe-446f-a319-2bd48f5f6f4b"). InnerVolumeSpecName "kube-api-access-tmskk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.140064 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b00e87af-1e21-4c4b-ae20-9da5de7e8176" (UID: "b00e87af-1e21-4c4b-ae20-9da5de7e8176"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.140946 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc" (OuterVolumeSpecName: "kube-api-access-wz2vc") pod "b00e87af-1e21-4c4b-ae20-9da5de7e8176" (UID: "b00e87af-1e21-4c4b-ae20-9da5de7e8176"). InnerVolumeSpecName "kube-api-access-wz2vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.202015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bd50996-0863-4c12-87b4-3e771a829d07" (UID: "6bd50996-0863-4c12-87b4-3e771a829d07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238255 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238298 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2vc\" (UniqueName: \"kubernetes.io/projected/b00e87af-1e21-4c4b-ae20-9da5de7e8176-kube-api-access-wz2vc\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238311 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238320 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd50996-0863-4c12-87b4-3e771a829d07-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238331 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmskk\" (UniqueName: \"kubernetes.io/projected/a8351117-bbbe-446f-a319-2bd48f5f6f4b-kube-api-access-tmskk\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238339 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmlq\" (UniqueName: \"kubernetes.io/projected/6bd50996-0863-4c12-87b4-3e771a829d07-kube-api-access-cdmlq\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.238349 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b00e87af-1e21-4c4b-ae20-9da5de7e8176-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.278761 4812 generic.go:334] "Generic (PLEG): container finished" podID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerID="c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8" exitCode=0 Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.278881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerDied","Data":"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.278921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-st44b" event={"ID":"18e4e3fe-6d0e-4509-8275-ba450daa2602","Type":"ContainerDied","Data":"04c89ba5a01e405f21d29ccb103948ce48f7809152391ed386f02d1b16700204"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.278959 4812 scope.go:117] "RemoveContainer" containerID="c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.279635 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-st44b" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.285154 4812 generic.go:334] "Generic (PLEG): container finished" podID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerID="1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7" exitCode=0 Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.285392 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.285804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" event={"ID":"b00e87af-1e21-4c4b-ae20-9da5de7e8176","Type":"ContainerDied","Data":"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.285869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsgtb" event={"ID":"b00e87af-1e21-4c4b-ae20-9da5de7e8176","Type":"ContainerDied","Data":"51c1df9c19edb83ea83b98af29550a1af4e0f286db93e3740b5c891f12935ae1"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.290507 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" exitCode=0 Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.290595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerDied","Data":"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.290623 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvsc5" event={"ID":"a8351117-bbbe-446f-a319-2bd48f5f6f4b","Type":"ContainerDied","Data":"8546c634a20e764fbaabfc9eef5d89d7f9b68e48504011823fe5a7f4feb474d3"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.290673 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvsc5" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.293341 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" event={"ID":"083e70e9-e72b-4e1b-a398-ebe2c7610368","Type":"ContainerStarted","Data":"f3b557ed064ccf8fc07802964dd0c314b38144df43c21bd28624fec0b69760bb"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.293372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" event={"ID":"083e70e9-e72b-4e1b-a398-ebe2c7610368","Type":"ContainerStarted","Data":"e75037fe3c1957ca2b721c0a5a1619d66c6ed959c5dfced080a7fc37a7a8270d"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.293608 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.294962 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p5ppf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.295017 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" podUID="083e70e9-e72b-4e1b-a398-ebe2c7610368" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.298658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4k6w9" event={"ID":"3913399c-b196-44e0-a381-0526a310bb4b","Type":"ContainerDied","Data":"1f2d008ae053d4176c63f844c409bea8a31edc27c99aedaa8231738b7b7edc97"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.298768 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4k6w9" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.301681 4812 scope.go:117] "RemoveContainer" containerID="5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.303748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86cmv" event={"ID":"6bd50996-0863-4c12-87b4-3e771a829d07","Type":"ContainerDied","Data":"365352f96f2d1b2f38ad7dff007503ea3a4ae433f56133e0c1e52ddb812a7734"} Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.303865 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86cmv" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.318617 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8351117-bbbe-446f-a319-2bd48f5f6f4b" (UID: "a8351117-bbbe-446f-a319-2bd48f5f6f4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.323938 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" podStartSLOduration=2.323912004 podStartE2EDuration="2.323912004s" podCreationTimestamp="2026-02-18 16:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:36:23.316862693 +0000 UTC m=+403.582473612" watchObservedRunningTime="2026-02-18 16:36:23.323912004 +0000 UTC m=+403.589522923" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.331885 4812 scope.go:117] "RemoveContainer" containerID="92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.339826 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8351117-bbbe-446f-a319-2bd48f5f6f4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.350706 4812 scope.go:117] "RemoveContainer" containerID="c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.351313 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8\": container with ID starting with c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8 not found: ID does not exist" containerID="c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.351456 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8"} err="failed to get container status \"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8\": rpc error: code = NotFound desc = could not find container \"c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8\": container with ID starting with c2b897742b435e74100c7aedb3d96d6e779345bbc729820c8d71f66ce90e7dc8 not found: ID does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.351578 4812 scope.go:117] "RemoveContainer" containerID="5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.351908 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe\": container with ID starting with 5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe not found: ID does not exist" containerID="5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.351932 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe"} err="failed to get container status \"5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe\": rpc error: code = NotFound desc = could not find container \"5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe\": container with ID starting with 5bb352a84fabbb5312d236d87b3c4a43068ce1841fff4e48b36857a7da8edefe not found: ID 
does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.351975 4812 scope.go:117] "RemoveContainer" containerID="92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.352311 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd\": container with ID starting with 92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd not found: ID does not exist" containerID="92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.352415 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd"} err="failed to get container status \"92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd\": rpc error: code = NotFound desc = could not find container \"92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd\": container with ID starting with 92b3675ee9103828e24a477eb5306eb092146bedc591a4f93acf3acea35e00fd not found: ID does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.352511 4812 scope.go:117] "RemoveContainer" containerID="1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.366547 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.371205 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-st44b"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.374794 4812 scope.go:117] "RemoveContainer" containerID="1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.379606 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7\": container with ID starting with 1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7 not found: ID does not exist" containerID="1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.379714 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7"} err="failed to get container status \"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7\": rpc error: code = NotFound desc = could not find container \"1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7\": container with ID starting with 1f3ab1882bfa2bb5cc50137ee86c3c7eebfb7a38831b5e3ecccbb2eff5bb92c7 not found: ID does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.379755 4812 scope.go:117] "RemoveContainer" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.388837 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.393531 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4k6w9"] Feb 18 16:36:23 
crc kubenswrapper[4812]: I0218 16:36:23.410000 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.413714 4812 scope.go:117] "RemoveContainer" containerID="d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.420684 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsgtb"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.426316 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.431053 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86cmv"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.435711 4812 scope.go:117] "RemoveContainer" containerID="5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.450535 4812 scope.go:117] "RemoveContainer" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.450930 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b\": container with ID starting with 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b not found: ID does not exist" containerID="3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.450962 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b"} err="failed to get container status \"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b\": rpc error: code = NotFound desc = could not find container \"3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b\": container with ID starting with 3e26667b3a3c386607693711673f514ad0d2eb3efae9fb0f0be821e9ec28f82b not found: ID does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.450991 4812 scope.go:117] "RemoveContainer" containerID="d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.451672 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c\": container with ID starting with d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c not found: ID does not exist" containerID="d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.451703 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c"} err="failed to get container status \"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c\": rpc error: code = NotFound desc = could not find container \"d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c\": container with ID starting with d5663d58bd9ca0e96907e67057f7498ceb72fb58c63cae6be63298743716a56c not found: ID does not exist" Feb 18 16:36:23 crc 
kubenswrapper[4812]: I0218 16:36:23.451716 4812 scope.go:117] "RemoveContainer" containerID="5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe" Feb 18 16:36:23 crc kubenswrapper[4812]: E0218 16:36:23.452277 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe\": container with ID starting with 5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe not found: ID does not exist" containerID="5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.452310 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe"} err="failed to get container status \"5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe\": rpc error: code = NotFound desc = could not find container \"5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe\": container with ID starting with 5aff45c75d026ad1b71934c756519829949cc7181896c1406994f01fd0f284fe not found: ID does not exist" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.452330 4812 scope.go:117] "RemoveContainer" containerID="4ad13f0d84028f152cbf9b5f9119076f5f31eb24354cd3f1268a8a4eb783be14" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.466300 4812 scope.go:117] "RemoveContainer" containerID="4fb78d09a7273e6dcc8765c0b389f54d7acfb4e4f506224189a9008c765ca127" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.481831 4812 scope.go:117] "RemoveContainer" containerID="99a7f65043360dce38b5bbcf09e7814693441a44ac72c01ed0a8137f9b53f1a1" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.501342 4812 scope.go:117] "RemoveContainer" containerID="ffa1ed499759eac86396c765ade08dd25c97be9a169f7a4538803369792bcec4" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.521266 4812 scope.go:117] "RemoveContainer" containerID="b21c08e4e86042f4e6335ef26520b0de980aa233d50c899c11e90fd9c090756c" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.538486 4812 scope.go:117] "RemoveContainer" containerID="7ba9d4114aba35b68c3e72edcfd02cc96452be5349b7e7e2ef9b0f06c3286f19" Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.626962 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:36:23 crc kubenswrapper[4812]: I0218 16:36:23.637271 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lvsc5"] Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022537 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88k9b"] Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022841 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022860 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022871 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022880 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022896 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022904 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022917 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022926 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022941 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022949 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022963 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022971 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.022987 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.022995 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023008 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023016 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023027 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023038 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023050 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023058 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023066 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023074 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="extract-utilities" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023090 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023121 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: E0218 16:36:24.023135 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023143 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="extract-content" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023281 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023299 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" containerName="marketplace-operator" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023312 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023323 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.023334 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3913399c-b196-44e0-a381-0526a310bb4b" containerName="registry-server" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.024438 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.027722 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.039122 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88k9b"] Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.152783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcs7\" (UniqueName: \"kubernetes.io/projected/945dcf1c-04b0-4c76-9261-19d57706f47e-kube-api-access-kpcs7\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.152867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-utilities\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.152928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-catalog-content\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.254755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-catalog-content\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.254828 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpcs7\" (UniqueName: \"kubernetes.io/projected/945dcf1c-04b0-4c76-9261-19d57706f47e-kube-api-access-kpcs7\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.254896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-utilities\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.255499 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-utilities\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.255581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945dcf1c-04b0-4c76-9261-19d57706f47e-catalog-content\") pod \"certified-operators-88k9b\" (UID: 
\"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.276780 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpcs7\" (UniqueName: \"kubernetes.io/projected/945dcf1c-04b0-4c76-9261-19d57706f47e-kube-api-access-kpcs7\") pod \"certified-operators-88k9b\" (UID: \"945dcf1c-04b0-4c76-9261-19d57706f47e\") " pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.318846 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p5ppf" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.345164 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.522672 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e4e3fe-6d0e-4509-8275-ba450daa2602" path="/var/lib/kubelet/pods/18e4e3fe-6d0e-4509-8275-ba450daa2602/volumes" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.523317 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3913399c-b196-44e0-a381-0526a310bb4b" path="/var/lib/kubelet/pods/3913399c-b196-44e0-a381-0526a310bb4b/volumes" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.523922 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd50996-0863-4c12-87b4-3e771a829d07" path="/var/lib/kubelet/pods/6bd50996-0863-4c12-87b4-3e771a829d07/volumes" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.524998 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8351117-bbbe-446f-a319-2bd48f5f6f4b" path="/var/lib/kubelet/pods/a8351117-bbbe-446f-a319-2bd48f5f6f4b/volumes" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.525627 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b00e87af-1e21-4c4b-ae20-9da5de7e8176" path="/var/lib/kubelet/pods/b00e87af-1e21-4c4b-ae20-9da5de7e8176/volumes" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.559025 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88k9b"] Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.624553 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g67qg"] Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.625962 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.630486 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.635302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g67qg"] Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.764189 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv9q5\" (UniqueName: \"kubernetes.io/projected/bb36f508-805c-42ff-94ce-25f8739f66ed-kube-api-access-pv9q5\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.764388 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-catalog-content\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.764425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-utilities\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.866196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-catalog-content\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.866266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-utilities\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.866328 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv9q5\" (UniqueName: \"kubernetes.io/projected/bb36f508-805c-42ff-94ce-25f8739f66ed-kube-api-access-pv9q5\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.867379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-catalog-content\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.867715 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb36f508-805c-42ff-94ce-25f8739f66ed-utilities\") pod \"redhat-marketplace-g67qg\" (UID: 
\"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.892914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv9q5\" (UniqueName: \"kubernetes.io/projected/bb36f508-805c-42ff-94ce-25f8739f66ed-kube-api-access-pv9q5\") pod \"redhat-marketplace-g67qg\" (UID: \"bb36f508-805c-42ff-94ce-25f8739f66ed\") " pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:24 crc kubenswrapper[4812]: I0218 16:36:24.974762 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:25 crc kubenswrapper[4812]: I0218 16:36:25.191800 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g67qg"] Feb 18 16:36:25 crc kubenswrapper[4812]: I0218 16:36:25.326517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g67qg" event={"ID":"bb36f508-805c-42ff-94ce-25f8739f66ed","Type":"ContainerStarted","Data":"c39016d10a9d785c87db2037f688b0f1b0d909dc3df401846f2dd2016827860a"} Feb 18 16:36:25 crc kubenswrapper[4812]: I0218 16:36:25.329528 4812 generic.go:334] "Generic (PLEG): container finished" podID="945dcf1c-04b0-4c76-9261-19d57706f47e" containerID="f6b196e166028399cf4f42ae317935018f52068848b05b3537245b9a72483271" exitCode=0 Feb 18 16:36:25 crc kubenswrapper[4812]: I0218 16:36:25.331428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88k9b" event={"ID":"945dcf1c-04b0-4c76-9261-19d57706f47e","Type":"ContainerDied","Data":"f6b196e166028399cf4f42ae317935018f52068848b05b3537245b9a72483271"} Feb 18 16:36:25 crc kubenswrapper[4812]: I0218 16:36:25.331485 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88k9b" event={"ID":"945dcf1c-04b0-4c76-9261-19d57706f47e","Type":"ContainerStarted","Data":"e05ef9e28b5a5fe75bf8115e2cc7dc6ec466682cd3d9bf8df4669baa439a0a18"} Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.341706 4812 generic.go:334] "Generic (PLEG): container finished" podID="bb36f508-805c-42ff-94ce-25f8739f66ed" containerID="46da9d8223ad0c0be51f97833202041ee3caef216beedd0a69d6f32a1e103edc" exitCode=0 Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.341763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g67qg" event={"ID":"bb36f508-805c-42ff-94ce-25f8739f66ed","Type":"ContainerDied","Data":"46da9d8223ad0c0be51f97833202041ee3caef216beedd0a69d6f32a1e103edc"} Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.424637 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pk4mz"] Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.426032 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.429466 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.438784 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk4mz"] Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.603169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xg9\" (UniqueName: \"kubernetes.io/projected/05cfe267-0637-43ce-8c4b-393fe990136d-kube-api-access-b2xg9\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.603233 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-utilities\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.603255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-catalog-content\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.704853 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-catalog-content\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.704979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xg9\" (UniqueName: \"kubernetes.io/projected/05cfe267-0637-43ce-8c4b-393fe990136d-kube-api-access-b2xg9\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.705007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-utilities\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.705557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-catalog-content\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.705581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cfe267-0637-43ce-8c4b-393fe990136d-utilities\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " 
pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.728668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xg9\" (UniqueName: \"kubernetes.io/projected/05cfe267-0637-43ce-8c4b-393fe990136d-kube-api-access-b2xg9\") pod \"redhat-operators-pk4mz\" (UID: \"05cfe267-0637-43ce-8c4b-393fe990136d\") " pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.752543 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:26 crc kubenswrapper[4812]: I0218 16:36:26.954663 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk4mz"] Feb 18 16:36:26 crc kubenswrapper[4812]: W0218 16:36:26.963984 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05cfe267_0637_43ce_8c4b_393fe990136d.slice/crio-ee5b7cc48ff7fd2404a96269eee35249082a7191f0a0ca20213183baa6ed9070 WatchSource:0}: Error finding container ee5b7cc48ff7fd2404a96269eee35249082a7191f0a0ca20213183baa6ed9070: Status 404 returned error can't find the container with id ee5b7cc48ff7fd2404a96269eee35249082a7191f0a0ca20213183baa6ed9070 Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.021809 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lmj92"] Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.025741 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.029457 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.037854 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmj92"] Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.213923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-utilities\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.214070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h252\" (UniqueName: \"kubernetes.io/projected/170cd4cd-fb98-45b4-a075-3ded1e2fb964-kube-api-access-6h252\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.214194 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-catalog-content\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.315418 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h252\" (UniqueName: 
\"kubernetes.io/projected/170cd4cd-fb98-45b4-a075-3ded1e2fb964-kube-api-access-6h252\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.315563 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-catalog-content\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.315591 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-utilities\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.316221 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-utilities\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.316314 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170cd4cd-fb98-45b4-a075-3ded1e2fb964-catalog-content\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.338627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h252\" (UniqueName: \"kubernetes.io/projected/170cd4cd-fb98-45b4-a075-3ded1e2fb964-kube-api-access-6h252\") pod \"community-operators-lmj92\" (UID: \"170cd4cd-fb98-45b4-a075-3ded1e2fb964\") " pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.345207 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.349844 4812 generic.go:334] "Generic (PLEG): container finished" podID="05cfe267-0637-43ce-8c4b-393fe990136d" containerID="bf8e37c4b2498a78341030dda8d724842a1910bd49f390233cec40a70d1a5aaa" exitCode=0 Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.349905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk4mz" event={"ID":"05cfe267-0637-43ce-8c4b-393fe990136d","Type":"ContainerDied","Data":"bf8e37c4b2498a78341030dda8d724842a1910bd49f390233cec40a70d1a5aaa"} Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.349931 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk4mz" event={"ID":"05cfe267-0637-43ce-8c4b-393fe990136d","Type":"ContainerStarted","Data":"ee5b7cc48ff7fd2404a96269eee35249082a7191f0a0ca20213183baa6ed9070"} Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.353675 4812 generic.go:334] "Generic (PLEG): container finished" podID="945dcf1c-04b0-4c76-9261-19d57706f47e" containerID="75675bf16615589f61a3874d315321de0a7998324afd4ed141058bdd2eedc7c7" exitCode=0 Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.353721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88k9b" event={"ID":"945dcf1c-04b0-4c76-9261-19d57706f47e","Type":"ContainerDied","Data":"75675bf16615589f61a3874d315321de0a7998324afd4ed141058bdd2eedc7c7"} Feb 18 16:36:27 crc kubenswrapper[4812]: I0218 16:36:27.560285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lmj92"] Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.364936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88k9b" event={"ID":"945dcf1c-04b0-4c76-9261-19d57706f47e","Type":"ContainerStarted","Data":"520a675f60bff9b4e47833859648a62ced4038637a88c344960d061c113e0e6a"} Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.366575 4812 generic.go:334] "Generic (PLEG): container finished" podID="170cd4cd-fb98-45b4-a075-3ded1e2fb964" containerID="02cce52c9af0c8a6f642a9e1cf0d9cbcf096786e9e4211509066aa025bbe7449" exitCode=0 Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.366719 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmj92" event={"ID":"170cd4cd-fb98-45b4-a075-3ded1e2fb964","Type":"ContainerDied","Data":"02cce52c9af0c8a6f642a9e1cf0d9cbcf096786e9e4211509066aa025bbe7449"} Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.366742 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmj92" event={"ID":"170cd4cd-fb98-45b4-a075-3ded1e2fb964","Type":"ContainerStarted","Data":"01dbae9e9c6668c9952ddd62ed3b80a60bdcb51f7ffad860904c2295ea450ce2"} Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.368748 4812 generic.go:334] "Generic (PLEG): container finished" podID="bb36f508-805c-42ff-94ce-25f8739f66ed" containerID="c80b413c880cfbbcf582580c07c41ff89b8e04e10b1bf1f1aa923e7beeeb637a" exitCode=0 Feb 18 16:36:28 crc kubenswrapper[4812]: I0218 16:36:28.368773 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g67qg" event={"ID":"bb36f508-805c-42ff-94ce-25f8739f66ed","Type":"ContainerDied","Data":"c80b413c880cfbbcf582580c07c41ff89b8e04e10b1bf1f1aa923e7beeeb637a"} Feb 18 16:36:28 crc 
kubenswrapper[4812]: I0218 16:36:28.393961 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88k9b" podStartSLOduration=1.940184211 podStartE2EDuration="4.393930226s" podCreationTimestamp="2026-02-18 16:36:24 +0000 UTC" firstStartedPulling="2026-02-18 16:36:25.331714942 +0000 UTC m=+405.597325841" lastFinishedPulling="2026-02-18 16:36:27.785460947 +0000 UTC m=+408.051071856" observedRunningTime="2026-02-18 16:36:28.388301964 +0000 UTC m=+408.653912873" watchObservedRunningTime="2026-02-18 16:36:28.393930226 +0000 UTC m=+408.659541135" Feb 18 16:36:29 crc kubenswrapper[4812]: I0218 16:36:29.379207 4812 generic.go:334] "Generic (PLEG): container finished" podID="05cfe267-0637-43ce-8c4b-393fe990136d" containerID="1a6f50c2acb23ed0249452a48d3c1d72e2073f66859f0ea2dad6642f91284b71" exitCode=0 Feb 18 16:36:29 crc kubenswrapper[4812]: I0218 16:36:29.379341 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk4mz" event={"ID":"05cfe267-0637-43ce-8c4b-393fe990136d","Type":"ContainerDied","Data":"1a6f50c2acb23ed0249452a48d3c1d72e2073f66859f0ea2dad6642f91284b71"} Feb 18 16:36:29 crc kubenswrapper[4812]: I0218 16:36:29.389413 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g67qg" event={"ID":"bb36f508-805c-42ff-94ce-25f8739f66ed","Type":"ContainerStarted","Data":"4cddb64373afb7d2c8f470a7ebd60e609fd87a2f808771c02d06c44e70c0e982"} Feb 18 16:36:29 crc kubenswrapper[4812]: I0218 16:36:29.421336 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g67qg" podStartSLOduration=2.73143732 podStartE2EDuration="5.421316381s" podCreationTimestamp="2026-02-18 16:36:24 +0000 UTC" firstStartedPulling="2026-02-18 16:36:26.344804709 +0000 UTC m=+406.610415618" lastFinishedPulling="2026-02-18 16:36:29.03468378 +0000 UTC m=+409.300294679" observedRunningTime="2026-02-18 16:36:29.419651736 +0000 UTC m=+409.685262655" watchObservedRunningTime="2026-02-18 16:36:29.421316381 +0000 UTC m=+409.686927290" Feb 18 16:36:30 crc kubenswrapper[4812]: I0218 16:36:30.398641 4812 generic.go:334] "Generic (PLEG): container finished" podID="170cd4cd-fb98-45b4-a075-3ded1e2fb964" containerID="8511690d49e3d5fb777534a96da3269cf71eb5a9a77fd59662aabf94089ebada" exitCode=0 Feb 18 16:36:30 crc kubenswrapper[4812]: I0218 16:36:30.398736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmj92" event={"ID":"170cd4cd-fb98-45b4-a075-3ded1e2fb964","Type":"ContainerDied","Data":"8511690d49e3d5fb777534a96da3269cf71eb5a9a77fd59662aabf94089ebada"} Feb 18 16:36:31 crc kubenswrapper[4812]: I0218 16:36:31.409187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk4mz" event={"ID":"05cfe267-0637-43ce-8c4b-393fe990136d","Type":"ContainerStarted","Data":"affad56e09e79c34faf7e0f07e4d17fb1588979d39ea133ce1e1bbd8c0060fed"} Feb 18 16:36:31 crc kubenswrapper[4812]: I0218 16:36:31.412278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lmj92" event={"ID":"170cd4cd-fb98-45b4-a075-3ded1e2fb964","Type":"ContainerStarted","Data":"0926c080c1a14ac77c4a281f021ff31e2b5a8e590df14ce34fb2cac9e0d44a39"} Feb 18 16:36:31 crc kubenswrapper[4812]: I0218 16:36:31.429795 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pk4mz" 
podStartSLOduration=2.051501973 podStartE2EDuration="5.429765906s" podCreationTimestamp="2026-02-18 16:36:26 +0000 UTC" firstStartedPulling="2026-02-18 16:36:27.366269314 +0000 UTC m=+407.631880223" lastFinishedPulling="2026-02-18 16:36:30.744533247 +0000 UTC m=+411.010144156" observedRunningTime="2026-02-18 16:36:31.426869547 +0000 UTC m=+411.692480476" watchObservedRunningTime="2026-02-18 16:36:31.429765906 +0000 UTC m=+411.695376815" Feb 18 16:36:31 crc kubenswrapper[4812]: I0218 16:36:31.450544 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lmj92" podStartSLOduration=1.6972431810000002 podStartE2EDuration="4.450522958s" podCreationTimestamp="2026-02-18 16:36:27 +0000 UTC" firstStartedPulling="2026-02-18 16:36:28.370702977 +0000 UTC m=+408.636313886" lastFinishedPulling="2026-02-18 16:36:31.123982754 +0000 UTC m=+411.389593663" observedRunningTime="2026-02-18 16:36:31.446047707 +0000 UTC m=+411.711658616" watchObservedRunningTime="2026-02-18 16:36:31.450522958 +0000 UTC m=+411.716133867" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.345474 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.346511 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.399459 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.476500 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88k9b" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.975520 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:34 crc kubenswrapper[4812]: I0218 16:36:34.975615 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:35 crc kubenswrapper[4812]: I0218 16:36:35.046526 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:35 crc kubenswrapper[4812]: I0218 16:36:35.513807 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g67qg" Feb 18 16:36:36 crc kubenswrapper[4812]: I0218 16:36:36.752984 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:36 crc kubenswrapper[4812]: I0218 16:36:36.753765 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:36 crc kubenswrapper[4812]: I0218 16:36:36.823132 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" podUID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" containerName="registry" containerID="cri-o://927225b8dac62cf92eb7085bdadfc63e7ecbb7eead023cdbd1f2bb81c28111af" gracePeriod=30 Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.346354 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:37 crc 
kubenswrapper[4812]: I0218 16:36:37.346886 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.393977 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.471219 4812 generic.go:334] "Generic (PLEG): container finished" podID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" containerID="927225b8dac62cf92eb7085bdadfc63e7ecbb7eead023cdbd1f2bb81c28111af" exitCode=0 Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.471406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" event={"ID":"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f","Type":"ContainerDied","Data":"927225b8dac62cf92eb7085bdadfc63e7ecbb7eead023cdbd1f2bb81c28111af"} Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.534988 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lmj92" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.689655 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781226 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781279 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781311 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781540 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781573 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbfq4\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781597 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc 
kubenswrapper[4812]: I0218 16:36:37.781614 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.781649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls\") pod \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\" (UID: \"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f\") " Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.783584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.784066 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.792776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.795862 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.796380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.796448 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4" (OuterVolumeSpecName: "kube-api-access-mbfq4") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "kube-api-access-mbfq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.798379 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.798809 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pk4mz" podUID="05cfe267-0637-43ce-8c4b-393fe990136d" containerName="registry-server" probeResult="failure" output=< Feb 18 16:36:37 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:36:37 crc kubenswrapper[4812]: > Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.811086 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" (UID: "3bd8442f-e0e4-49f0-bacf-91007ac2ad2f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883004 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883052 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883066 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883076 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbfq4\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-kube-api-access-mbfq4\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883111 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883126 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:37 crc kubenswrapper[4812]: I0218 16:36:37.883137 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:36:38 crc kubenswrapper[4812]: I0218 16:36:38.479324 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" Feb 18 16:36:38 crc kubenswrapper[4812]: I0218 16:36:38.479308 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sqzbm" event={"ID":"3bd8442f-e0e4-49f0-bacf-91007ac2ad2f","Type":"ContainerDied","Data":"3d21ee9c004d20bbdd0d0ef98a692d93d20d7463fef24cef6a6aff1272b1b966"} Feb 18 16:36:38 crc kubenswrapper[4812]: I0218 16:36:38.479950 4812 scope.go:117] "RemoveContainer" containerID="927225b8dac62cf92eb7085bdadfc63e7ecbb7eead023cdbd1f2bb81c28111af" Feb 18 16:36:38 crc kubenswrapper[4812]: I0218 16:36:38.528013 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:36:38 crc kubenswrapper[4812]: I0218 16:36:38.528372 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sqzbm"] Feb 18 16:36:40 crc kubenswrapper[4812]: I0218 16:36:40.518600 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" path="/var/lib/kubelet/pods/3bd8442f-e0e4-49f0-bacf-91007ac2ad2f/volumes" Feb 18 16:36:46 crc kubenswrapper[4812]: I0218 16:36:46.803150 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:36:46 crc kubenswrapper[4812]: I0218 16:36:46.869899 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pk4mz" Feb 18 16:38:03 crc kubenswrapper[4812]: I0218 16:38:03.414495 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:38:03 crc kubenswrapper[4812]: I0218 16:38:03.415690 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:38:33 crc kubenswrapper[4812]: I0218 16:38:33.413647 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:38:33 crc kubenswrapper[4812]: I0218 16:38:33.414559 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:38:40 crc kubenswrapper[4812]: I0218 16:38:40.774256 4812 scope.go:117] "RemoveContainer" containerID="e78d0429ffe133e90a6e95007b21adaa8f642fcbfbdde8a766089a9fb8feebc7" Feb 18 16:39:03 crc kubenswrapper[4812]: I0218 16:39:03.414498 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:39:03 crc kubenswrapper[4812]: I0218 16:39:03.415414 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:39:03 crc kubenswrapper[4812]: I0218 16:39:03.415488 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:39:03 crc kubenswrapper[4812]: I0218 16:39:03.416558 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:39:03 crc kubenswrapper[4812]: I0218 16:39:03.416665 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb" gracePeriod=600 Feb 18 16:39:04 crc kubenswrapper[4812]: I0218 16:39:04.511601 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb" exitCode=0 Feb 18 16:39:04 crc kubenswrapper[4812]: I0218 16:39:04.516136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb"} Feb 18 16:39:04 crc kubenswrapper[4812]: I0218 16:39:04.516211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60"} Feb 18 16:39:04 crc kubenswrapper[4812]: I0218 16:39:04.516240 4812 scope.go:117] "RemoveContainer" containerID="f4a5eb0aa8d3b16ef31b9f2c9747a7c82a061c3e7f41364426a2ef6b29647a5e" Feb 18 16:39:40 crc kubenswrapper[4812]: I0218 16:39:40.835140 4812 scope.go:117] "RemoveContainer" containerID="f797d60019c86bd79d910e4d6fa5c49fce67349b08fa7ee8c0a4fe236f7bf822" Feb 18 16:39:40 crc kubenswrapper[4812]: I0218 16:39:40.856069 4812 scope.go:117] "RemoveContainer" containerID="966537d4ec686fab14536c4994f21d6e1a04314a79826cceac915cff036ba366" Feb 18 16:41:03 crc kubenswrapper[4812]: I0218 16:41:03.413630 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:41:03 crc kubenswrapper[4812]: I0218 16:41:03.414601 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.067163 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl"] Feb 18 16:41:13 crc kubenswrapper[4812]: E0218 16:41:13.068366 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" containerName="registry" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.068386 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" containerName="registry" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.068536 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd8442f-e0e4-49f0-bacf-91007ac2ad2f" containerName="registry" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.069087 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.071532 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.073256 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h7j7l"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.074345 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h7j7l" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.074428 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5q4sd" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.074696 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.077724 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2t2rr" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.079092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.085187 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h7j7l"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.147194 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qln2f"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.148136 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.150008 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wlrp7" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.161390 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qln2f"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.225050 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cpl\" (UniqueName: \"kubernetes.io/projected/08fea773-a7c8-4ba7-94dd-3d28d98dea63-kube-api-access-42cpl\") pod \"cert-manager-webhook-687f57d79b-qln2f\" (UID: \"08fea773-a7c8-4ba7-94dd-3d28d98dea63\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.225117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfzp\" (UniqueName: \"kubernetes.io/projected/35d9ea4b-c563-487f-ab95-bfb14d853e68-kube-api-access-npfzp\") pod \"cert-manager-858654f9db-h7j7l\" (UID: \"35d9ea4b-c563-487f-ab95-bfb14d853e68\") " pod="cert-manager/cert-manager-858654f9db-h7j7l" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.225237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdgc\" (UniqueName: \"kubernetes.io/projected/67a0e83c-d0b4-4eb0-9525-3a4c502073d8-kube-api-access-7kdgc\") pod \"cert-manager-cainjector-cf98fcc89-wk4sl\" (UID: \"67a0e83c-d0b4-4eb0-9525-3a4c502073d8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.327055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdgc\" (UniqueName: \"kubernetes.io/projected/67a0e83c-d0b4-4eb0-9525-3a4c502073d8-kube-api-access-7kdgc\") pod \"cert-manager-cainjector-cf98fcc89-wk4sl\" (UID: \"67a0e83c-d0b4-4eb0-9525-3a4c502073d8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.327184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cpl\" (UniqueName: \"kubernetes.io/projected/08fea773-a7c8-4ba7-94dd-3d28d98dea63-kube-api-access-42cpl\") pod \"cert-manager-webhook-687f57d79b-qln2f\" (UID: \"08fea773-a7c8-4ba7-94dd-3d28d98dea63\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.327207 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfzp\" (UniqueName: \"kubernetes.io/projected/35d9ea4b-c563-487f-ab95-bfb14d853e68-kube-api-access-npfzp\") pod \"cert-manager-858654f9db-h7j7l\" (UID: \"35d9ea4b-c563-487f-ab95-bfb14d853e68\") " pod="cert-manager/cert-manager-858654f9db-h7j7l" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.348061 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdgc\" (UniqueName: \"kubernetes.io/projected/67a0e83c-d0b4-4eb0-9525-3a4c502073d8-kube-api-access-7kdgc\") pod \"cert-manager-cainjector-cf98fcc89-wk4sl\" (UID: \"67a0e83c-d0b4-4eb0-9525-3a4c502073d8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.348719 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-npfzp\" (UniqueName: \"kubernetes.io/projected/35d9ea4b-c563-487f-ab95-bfb14d853e68-kube-api-access-npfzp\") pod \"cert-manager-858654f9db-h7j7l\" (UID: \"35d9ea4b-c563-487f-ab95-bfb14d853e68\") " pod="cert-manager/cert-manager-858654f9db-h7j7l" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.349976 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cpl\" (UniqueName: \"kubernetes.io/projected/08fea773-a7c8-4ba7-94dd-3d28d98dea63-kube-api-access-42cpl\") pod \"cert-manager-webhook-687f57d79b-qln2f\" (UID: \"08fea773-a7c8-4ba7-94dd-3d28d98dea63\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.437946 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.446370 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h7j7l" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.463816 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.688833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl"] Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.708677 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.934915 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qln2f"] Feb 18 16:41:13 crc kubenswrapper[4812]: W0218 16:41:13.939278 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fea773_a7c8_4ba7_94dd_3d28d98dea63.slice/crio-d3340766980c43494b4286cb37b03423a22b79f20428f8b7fdd2899471b8c49f WatchSource:0}: Error finding container d3340766980c43494b4286cb37b03423a22b79f20428f8b7fdd2899471b8c49f: Status 404 returned error can't find the container with id d3340766980c43494b4286cb37b03423a22b79f20428f8b7fdd2899471b8c49f Feb 18 16:41:13 crc kubenswrapper[4812]: I0218 16:41:13.947021 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h7j7l"] Feb 18 16:41:13 crc kubenswrapper[4812]: W0218 16:41:13.958869 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d9ea4b_c563_487f_ab95_bfb14d853e68.slice/crio-119d0e96d914e43cb730658c8c240b74ce6db332e15b988b705b2c9da43a54bf WatchSource:0}: Error finding container 119d0e96d914e43cb730658c8c240b74ce6db332e15b988b705b2c9da43a54bf: Status 404 returned error can't find the container with id 119d0e96d914e43cb730658c8c240b74ce6db332e15b988b705b2c9da43a54bf Feb 18 16:41:14 crc kubenswrapper[4812]: I0218 16:41:14.376408 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" event={"ID":"67a0e83c-d0b4-4eb0-9525-3a4c502073d8","Type":"ContainerStarted","Data":"b020bbe649a92e0fe003dd04bae435468399f6173fdeacfbf9b80b3b3b643f36"} Feb 18 16:41:14 crc kubenswrapper[4812]: I0218 16:41:14.379121 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" 
event={"ID":"08fea773-a7c8-4ba7-94dd-3d28d98dea63","Type":"ContainerStarted","Data":"d3340766980c43494b4286cb37b03423a22b79f20428f8b7fdd2899471b8c49f"} Feb 18 16:41:14 crc kubenswrapper[4812]: I0218 16:41:14.380629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h7j7l" event={"ID":"35d9ea4b-c563-487f-ab95-bfb14d853e68","Type":"ContainerStarted","Data":"119d0e96d914e43cb730658c8c240b74ce6db332e15b988b705b2c9da43a54bf"} Feb 18 16:41:16 crc kubenswrapper[4812]: I0218 16:41:16.404407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" event={"ID":"67a0e83c-d0b4-4eb0-9525-3a4c502073d8","Type":"ContainerStarted","Data":"27293ad7a318bbfac010f4746d5895e071305cc0c943601353af70484806d56c"} Feb 18 16:41:16 crc kubenswrapper[4812]: I0218 16:41:16.429974 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wk4sl" podStartSLOduration=1.253976254 podStartE2EDuration="3.429948604s" podCreationTimestamp="2026-02-18 16:41:13 +0000 UTC" firstStartedPulling="2026-02-18 16:41:13.708413446 +0000 UTC m=+693.974024355" lastFinishedPulling="2026-02-18 16:41:15.884385796 +0000 UTC m=+696.149996705" observedRunningTime="2026-02-18 16:41:16.4269396 +0000 UTC m=+696.692550509" watchObservedRunningTime="2026-02-18 16:41:16.429948604 +0000 UTC m=+696.695559523" Feb 18 16:41:18 crc kubenswrapper[4812]: I0218 16:41:18.417331 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" event={"ID":"08fea773-a7c8-4ba7-94dd-3d28d98dea63","Type":"ContainerStarted","Data":"b7e2bf786c6a67326b61a311bb05120dd4b0b28babc037249fd0bc50632c8c20"} Feb 18 16:41:18 crc kubenswrapper[4812]: I0218 16:41:18.417466 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:18 crc kubenswrapper[4812]: I0218 16:41:18.419313 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h7j7l" event={"ID":"35d9ea4b-c563-487f-ab95-bfb14d853e68","Type":"ContainerStarted","Data":"f14e9f0cdd3545905297e92ef64c1647fde0e266f60a7db55ee0e1360b154907"} Feb 18 16:41:18 crc kubenswrapper[4812]: I0218 16:41:18.436633 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" podStartSLOduration=1.5886870370000001 podStartE2EDuration="5.436600889s" podCreationTimestamp="2026-02-18 16:41:13 +0000 UTC" firstStartedPulling="2026-02-18 16:41:13.941582347 +0000 UTC m=+694.207193266" lastFinishedPulling="2026-02-18 16:41:17.789496209 +0000 UTC m=+698.055107118" observedRunningTime="2026-02-18 16:41:18.433563514 +0000 UTC m=+698.699174423" watchObservedRunningTime="2026-02-18 16:41:18.436600889 +0000 UTC m=+698.702211798" Feb 18 16:41:18 crc kubenswrapper[4812]: I0218 16:41:18.451711 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h7j7l" podStartSLOduration=1.563622145 podStartE2EDuration="5.451683634s" podCreationTimestamp="2026-02-18 16:41:13 +0000 UTC" firstStartedPulling="2026-02-18 16:41:13.96064246 +0000 UTC m=+694.226253379" lastFinishedPulling="2026-02-18 16:41:17.848703959 +0000 UTC m=+698.114314868" observedRunningTime="2026-02-18 16:41:18.450960286 +0000 UTC m=+698.716571195" watchObservedRunningTime="2026-02-18 16:41:18.451683634 +0000 UTC m=+698.717294543" Feb 18 16:41:23 crc 
kubenswrapper[4812]: I0218 16:41:23.466875 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qln2f" Feb 18 16:41:33 crc kubenswrapper[4812]: I0218 16:41:33.414211 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:41:33 crc kubenswrapper[4812]: I0218 16:41:33.415186 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.966825 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v49jp"] Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967738 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-controller" containerID="cri-o://97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967819 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967821 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="northd" containerID="cri-o://17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967879 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-node" containerID="cri-o://f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967934 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-acl-logging" containerID="cri-o://f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.967797 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="nbdb" containerID="cri-o://64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" gracePeriod=30 Feb 18 16:41:36 crc kubenswrapper[4812]: I0218 16:41:36.968041 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="sbdb" 
containerID="cri-o://2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" gracePeriod=30 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.017527 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" containerID="cri-o://9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" gracePeriod=30 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.327470 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/3.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.329938 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovn-acl-logging/0.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.330532 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovn-controller/0.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.331060 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.404685 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r6dsv"] Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.404951 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.404968 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.404982 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.404991 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405002 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405011 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405022 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405028 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405036 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-node" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405043 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" 
containerName="kube-rbac-proxy-node" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405052 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kubecfg-setup" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405059 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kubecfg-setup" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405069 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405075 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405087 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="nbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405117 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="nbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405125 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="northd" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405130 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="northd" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405140 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="sbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405147 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="sbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405159 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405165 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405175 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-acl-logging" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405181 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-acl-logging" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405291 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="kube-rbac-proxy-node" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405301 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405309 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405319 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405328 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovn-acl-logging" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405336 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="sbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405343 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="nbdb" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405350 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405358 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405366 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="northd" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.405458 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405465 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405562 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.405571 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerName="ovnkube-controller" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.407456 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504790 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504827 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504856 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504928 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504957 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.504989 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505155 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505190 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505234 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrqnv\" (UniqueName: \"kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505290 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505334 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505386 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505452 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505502 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505584 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet\") pod \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\" (UID: \"1c8bd0ec-00c8-4cc8-a689-073a151689d5\") " Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.505988 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506879 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506915 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506948 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.506982 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.507018 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.507056 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.507132 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket" (OuterVolumeSpecName: "log-socket") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.507168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash" (OuterVolumeSpecName: "host-slash") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.507202 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.508434 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log" (OuterVolumeSpecName: "node-log") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.508592 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). 
InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.508932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.508954 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.513933 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv" (OuterVolumeSpecName: "kube-api-access-xrqnv") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "kube-api-access-xrqnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.514560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.521954 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1c8bd0ec-00c8-4cc8-a689-073a151689d5" (UID: "1c8bd0ec-00c8-4cc8-a689-073a151689d5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.536480 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovnkube-controller/3.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.540535 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovn-acl-logging/0.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.541328 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v49jp_1c8bd0ec-00c8-4cc8-a689-073a151689d5/ovn-controller/0.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546710 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546737 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546746 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546753 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546762 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546770 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" exitCode=0 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546777 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" exitCode=143 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546785 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" exitCode=143 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546836 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546884 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546916 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546927 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546940 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546945 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546951 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546956 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546962 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546968 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546974 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546981 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.546999 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547005 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547011 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547017 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547022 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547028 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547033 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547038 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547043 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547048 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547055 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547063 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547069 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 
16:41:37.547074 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547079 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547085 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547089 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547114 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547121 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547126 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547132 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547139 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" event={"ID":"1c8bd0ec-00c8-4cc8-a689-073a151689d5","Type":"ContainerDied","Data":"a6a2b94809321961d3e59d1ad259af430c3708072ba350402bf698c81b7eed06"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547148 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547154 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547159 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547164 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547169 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 
16:41:37.547174 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547180 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547186 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547191 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547196 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547212 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.547399 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v49jp" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.552429 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/2.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.552788 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/1.log" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.552823 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf2b75a7-be08-4a51-b100-9a75359bbd18" containerID="c9cc37b9bafc7a9f647bcdd5d7319d73c4ed7efbbbde1b2c61a0de90b6b92e56" exitCode=2 Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.552848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerDied","Data":"c9cc37b9bafc7a9f647bcdd5d7319d73c4ed7efbbbde1b2c61a0de90b6b92e56"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.552868 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0"} Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.553277 4812 scope.go:117] "RemoveContainer" containerID="c9cc37b9bafc7a9f647bcdd5d7319d73c4ed7efbbbde1b2c61a0de90b6b92e56" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.553522 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-prrcg_openshift-multus(cf2b75a7-be08-4a51-b100-9a75359bbd18)\"" pod="openshift-multus/multus-prrcg" podUID="cf2b75a7-be08-4a51-b100-9a75359bbd18" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.576492 4812 scope.go:117] "RemoveContainer" 
containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.595181 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v49jp"] Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.598448 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v49jp"] Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.605507 4812 scope.go:117] "RemoveContainer" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.606906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstpm\" (UniqueName: \"kubernetes.io/projected/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-kube-api-access-zstpm\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.606956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovn-node-metrics-cert\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.606986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-bin\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-script-lib\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-ovn\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-netns\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607318 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-log-socket\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607405 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-kubelet\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607470 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-slash\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-node-log\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607798 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-systemd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-config\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-systemd-units\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-etc-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.607972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-var-lib-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-env-overrides\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608160 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-netd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608227 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608244 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608257 4812 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608270 4812 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608284 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608298 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608312 4812 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-var-lib-openvswitch\") on node \"crc\" DevicePath 
\"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608326 4812 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608338 4812 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608350 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608364 4812 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608380 4812 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608393 4812 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608407 4812 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608420 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608434 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrqnv\" (UniqueName: \"kubernetes.io/projected/1c8bd0ec-00c8-4cc8-a689-073a151689d5-kube-api-access-xrqnv\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608446 4812 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608458 4812 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1c8bd0ec-00c8-4cc8-a689-073a151689d5-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608471 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1c8bd0ec-00c8-4cc8-a689-073a151689d5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.608483 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1c8bd0ec-00c8-4cc8-a689-073a151689d5-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 16:41:37 crc 
kubenswrapper[4812]: I0218 16:41:37.627822 4812 scope.go:117] "RemoveContainer" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.641344 4812 scope.go:117] "RemoveContainer" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.654753 4812 scope.go:117] "RemoveContainer" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.668364 4812 scope.go:117] "RemoveContainer" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.681692 4812 scope.go:117] "RemoveContainer" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.692201 4812 scope.go:117] "RemoveContainer" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.706048 4812 scope.go:117] "RemoveContainer" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709417 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovn-node-metrics-cert\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-bin\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709497 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-script-lib\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-ovn\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-netns\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-log-socket\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc 
kubenswrapper[4812]: I0218 16:41:37.709617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-kubelet\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709624 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-ovn\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709710 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709753 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-kubelet\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-log-socket\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-slash\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-netns\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709853 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709875 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-slash\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709901 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-node-log\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-node-log\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-systemd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-run-systemd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.709996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-config\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710073 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-systemd-units\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-etc-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-var-lib-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710331 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-env-overrides\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-netd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstpm\" (UniqueName: \"kubernetes.io/projected/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-kube-api-access-zstpm\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710531 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-script-lib\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710613 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovnkube-config\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-netd\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710711 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-var-lib-openvswitch\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-etc-openvswitch\") pod 
\"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710874 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-env-overrides\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.710896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-systemd-units\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.711011 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.711185 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-host-cni-bin\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.713369 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-ovn-node-metrics-cert\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.720155 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.723605 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.723650 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} err="failed to get container status \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.723677 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.723932 4812 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": container with ID starting with 0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650 not found: ID does not exist" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.723958 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} err="failed to get container status \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": rpc error: code = NotFound desc = could not find container \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": container with ID starting with 0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.723976 4812 scope.go:117] "RemoveContainer" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.724218 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": container with ID starting with 2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de not found: ID does not exist" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724246 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} err="failed to get container status \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": rpc error: code = NotFound desc = could not find container \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": container with ID starting with 2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724259 4812 scope.go:117] "RemoveContainer" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.724455 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": container with ID starting with 64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0 not found: ID does not exist" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724474 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} err="failed to get container status \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": rpc error: code = NotFound desc = could not find container \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": container with ID starting with 64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724486 4812 scope.go:117] "RemoveContainer" 
containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.724676 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": container with ID starting with 17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6 not found: ID does not exist" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724697 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} err="failed to get container status \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": rpc error: code = NotFound desc = could not find container \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": container with ID starting with 17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724711 4812 scope.go:117] "RemoveContainer" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.724900 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": container with ID starting with 72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659 not found: ID does not exist" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724930 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} err="failed to get container status \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": rpc error: code = NotFound desc = could not find container \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": container with ID starting with 72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.724950 4812 scope.go:117] "RemoveContainer" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.725135 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": container with ID starting with f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c not found: ID does not exist" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725154 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} err="failed to get container status \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": rpc error: code = NotFound desc = could not find container \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": container with ID starting with 
f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725165 4812 scope.go:117] "RemoveContainer" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.725404 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": container with ID starting with f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218 not found: ID does not exist" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725430 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} err="failed to get container status \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": rpc error: code = NotFound desc = could not find container \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": container with ID starting with f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725443 4812 scope.go:117] "RemoveContainer" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.725735 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": container with ID starting with 97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7 not found: ID does not exist" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725779 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} err="failed to get container status \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": rpc error: code = NotFound desc = could not find container \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": container with ID starting with 97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.725814 4812 scope.go:117] "RemoveContainer" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: E0218 16:41:37.726069 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": container with ID starting with b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c not found: ID does not exist" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726109 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} err="failed to get container status \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": rpc 
error: code = NotFound desc = could not find container \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": container with ID starting with b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726123 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726380 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} err="failed to get container status \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726438 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726719 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} err="failed to get container status \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": rpc error: code = NotFound desc = could not find container \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": container with ID starting with 0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.726745 4812 scope.go:117] "RemoveContainer" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.727784 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} err="failed to get container status \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": rpc error: code = NotFound desc = could not find container \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": container with ID starting with 2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.727808 4812 scope.go:117] "RemoveContainer" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.728035 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} err="failed to get container status \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": rpc error: code = NotFound desc = could not find container \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": container with ID starting with 64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.728056 4812 scope.go:117] "RemoveContainer" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc 
kubenswrapper[4812]: I0218 16:41:37.728284 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} err="failed to get container status \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": rpc error: code = NotFound desc = could not find container \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": container with ID starting with 17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.728317 4812 scope.go:117] "RemoveContainer" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.728603 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} err="failed to get container status \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": rpc error: code = NotFound desc = could not find container \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": container with ID starting with 72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.728624 4812 scope.go:117] "RemoveContainer" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729217 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} err="failed to get container status \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": rpc error: code = NotFound desc = could not find container \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": container with ID starting with f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729254 4812 scope.go:117] "RemoveContainer" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729494 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} err="failed to get container status \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": rpc error: code = NotFound desc = could not find container \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": container with ID starting with f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729515 4812 scope.go:117] "RemoveContainer" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729731 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} err="failed to get container status \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": rpc error: code = NotFound desc = could not find container \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": container with ID 
starting with 97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729758 4812 scope.go:117] "RemoveContainer" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729953 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} err="failed to get container status \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": rpc error: code = NotFound desc = could not find container \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": container with ID starting with b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.729971 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730183 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} err="failed to get container status \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730213 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730453 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} err="failed to get container status \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": rpc error: code = NotFound desc = could not find container \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": container with ID starting with 0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730474 4812 scope.go:117] "RemoveContainer" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730717 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} err="failed to get container status \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": rpc error: code = NotFound desc = could not find container \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": container with ID starting with 2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730742 4812 scope.go:117] "RemoveContainer" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.730981 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} err="failed to get container status \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": rpc error: code = NotFound desc = could not find container \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": container with ID starting with 64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731006 4812 scope.go:117] "RemoveContainer" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731213 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} err="failed to get container status \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": rpc error: code = NotFound desc = could not find container \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": container with ID starting with 17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731232 4812 scope.go:117] "RemoveContainer" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731397 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} err="failed to get container status \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": rpc error: code = NotFound desc = could not find container \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": container with ID starting with 72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731415 4812 scope.go:117] "RemoveContainer" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731659 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} err="failed to get container status \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": rpc error: code = NotFound desc = could not find container \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": container with ID starting with f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731682 4812 scope.go:117] "RemoveContainer" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstpm\" (UniqueName: \"kubernetes.io/projected/e56befd1-7a33-4f70-8a30-ee9a21b3b5fa-kube-api-access-zstpm\") pod \"ovnkube-node-r6dsv\" (UID: \"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731915 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} err="failed to get container status \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": rpc error: code = NotFound desc = could not find container \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": container with ID starting with f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.731932 4812 scope.go:117] "RemoveContainer" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732076 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} err="failed to get container status \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": rpc error: code = NotFound desc = could not find container \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": container with ID starting with 97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732089 4812 scope.go:117] "RemoveContainer" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732260 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} err="failed to get container status \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": rpc error: code = NotFound desc = could not find container \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": container with ID starting with b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732272 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732414 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} err="failed to get container status \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732426 4812 scope.go:117] "RemoveContainer" containerID="0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732578 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650"} err="failed to get container status \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": rpc error: code = NotFound desc = could not find container \"0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650\": container with ID starting with 0f4bad42e7fcaf052a19105b95b8498f7aec33a007cc6699e9695019755fa650 not found: ID does not exist" Feb 
18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732590 4812 scope.go:117] "RemoveContainer" containerID="2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732767 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de"} err="failed to get container status \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": rpc error: code = NotFound desc = could not find container \"2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de\": container with ID starting with 2c91b6281727f9da6f80adbf3128f3b84ec69c611f7f67c199047feca00503de not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.732781 4812 scope.go:117] "RemoveContainer" containerID="64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733025 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0"} err="failed to get container status \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": rpc error: code = NotFound desc = could not find container \"64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0\": container with ID starting with 64931380cefe5fa0ab988e89162fecb649f0e15aa8e9d56282a35bb4044c5fa0 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733039 4812 scope.go:117] "RemoveContainer" containerID="17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733261 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6"} err="failed to get container status \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": rpc error: code = NotFound desc = could not find container \"17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6\": container with ID starting with 17136e54cdae1b9d169fccc7d8ab6467158518ed168c1bf9d0d676e0a6f50cf6 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733276 4812 scope.go:117] "RemoveContainer" containerID="72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733507 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659"} err="failed to get container status \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": rpc error: code = NotFound desc = could not find container \"72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659\": container with ID starting with 72a144376250c713b3dfae0fcb93cbe19e7d42a1eb24864f96c18e9e0661d659 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733521 4812 scope.go:117] "RemoveContainer" containerID="f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733711 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c"} err="failed to get container status 
\"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": rpc error: code = NotFound desc = could not find container \"f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c\": container with ID starting with f211bc1742dc3f388b54dd8a4dffeffc47864b0d38a308a19881a98ec31bbf3c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733725 4812 scope.go:117] "RemoveContainer" containerID="f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733887 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218"} err="failed to get container status \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": rpc error: code = NotFound desc = could not find container \"f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218\": container with ID starting with f2c580196905d3f2f5f1a4175edbc6da24858423c79920f6dae971ee0d053218 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.733898 4812 scope.go:117] "RemoveContainer" containerID="97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.734048 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7"} err="failed to get container status \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": rpc error: code = NotFound desc = could not find container \"97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7\": container with ID starting with 97ae50112b85a062a9c37a6b03dc66d55629aea6a5d1337cf7987c9fd294aee7 not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.734060 4812 scope.go:117] "RemoveContainer" containerID="b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.734308 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c"} err="failed to get container status \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": rpc error: code = NotFound desc = could not find container \"b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c\": container with ID starting with b939b0a1afe7efb215dd6c5d01a5ef5f795a44f46daf8851ab3f44910d12111c not found: ID does not exist" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.734326 4812 scope.go:117] "RemoveContainer" containerID="9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33" Feb 18 16:41:37 crc kubenswrapper[4812]: I0218 16:41:37.734526 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33"} err="failed to get container status \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": rpc error: code = NotFound desc = could not find container \"9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33\": container with ID starting with 9b7c6c4405f713d7792e4b7a053a74c6aad241f9b30110c9709fc75c355e6a33 not found: ID does not exist" Feb 18 16:41:38 crc kubenswrapper[4812]: I0218 16:41:38.022875 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:38 crc kubenswrapper[4812]: W0218 16:41:38.041932 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56befd1_7a33_4f70_8a30_ee9a21b3b5fa.slice/crio-1b23cc252f67ef4e6a7565da0085652434b136683781c6be68abd086052c4e65 WatchSource:0}: Error finding container 1b23cc252f67ef4e6a7565da0085652434b136683781c6be68abd086052c4e65: Status 404 returned error can't find the container with id 1b23cc252f67ef4e6a7565da0085652434b136683781c6be68abd086052c4e65 Feb 18 16:41:38 crc kubenswrapper[4812]: I0218 16:41:38.519066 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8bd0ec-00c8-4cc8-a689-073a151689d5" path="/var/lib/kubelet/pods/1c8bd0ec-00c8-4cc8-a689-073a151689d5/volumes" Feb 18 16:41:38 crc kubenswrapper[4812]: I0218 16:41:38.562592 4812 generic.go:334] "Generic (PLEG): container finished" podID="e56befd1-7a33-4f70-8a30-ee9a21b3b5fa" containerID="136386a95ee8788078db6bd2f356f062ff94c3c8188fae85d2bf726aaa5a0d73" exitCode=0 Feb 18 16:41:38 crc kubenswrapper[4812]: I0218 16:41:38.562694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerDied","Data":"136386a95ee8788078db6bd2f356f062ff94c3c8188fae85d2bf726aaa5a0d73"} Feb 18 16:41:38 crc kubenswrapper[4812]: I0218 16:41:38.562775 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"1b23cc252f67ef4e6a7565da0085652434b136683781c6be68abd086052c4e65"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.577261 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"a722573183c70a06df2d6f7f78e5ee7fceb7dba11db7366036d99e3492c5a689"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.578491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"664d35a76175b4aa3621534e70e3631544f0c8708b83247a2193cd7c3fb725fc"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.578512 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"2849b32672768aaf3feb05a6dd48e6bdb6af7facacc9eda1e3696f7a6538e6a2"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.578526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"8a05393730622413915cadc3b596647216d03d31ed931243e877d4bf01cedc66"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.578538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"5cfac1452f6dcd590120bdfb9453a8266482b158b308fff88aad345953365997"} Feb 18 16:41:39 crc kubenswrapper[4812]: I0218 16:41:39.578550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" 
event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"26d3fdaf1cf15bfc1eb3c5931d9243a87c40d270946dacff113c5a926b15d181"} Feb 18 16:41:40 crc kubenswrapper[4812]: I0218 16:41:40.909356 4812 scope.go:117] "RemoveContainer" containerID="ee6798ff4bfabc5fbdf83e504022efbb0a38e23d21ccdb676f52d31232436bc0" Feb 18 16:41:41 crc kubenswrapper[4812]: I0218 16:41:41.593311 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/2.log" Feb 18 16:41:42 crc kubenswrapper[4812]: I0218 16:41:42.605149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"749db916ca1fc42421ff44020f2fa9dd0e299476bdf6ad2f4bcf1bb79a90a029"} Feb 18 16:41:44 crc kubenswrapper[4812]: I0218 16:41:44.621664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" event={"ID":"e56befd1-7a33-4f70-8a30-ee9a21b3b5fa","Type":"ContainerStarted","Data":"4275de15de44abd6577f2844b739cdb68ac4ae294aa3cb536fc573342fef73ff"} Feb 18 16:41:44 crc kubenswrapper[4812]: I0218 16:41:44.622171 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:44 crc kubenswrapper[4812]: I0218 16:41:44.622183 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:44 crc kubenswrapper[4812]: I0218 16:41:44.647523 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:44 crc kubenswrapper[4812]: I0218 16:41:44.652053 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" podStartSLOduration=7.652032404 podStartE2EDuration="7.652032404s" podCreationTimestamp="2026-02-18 16:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:41:44.651535072 +0000 UTC m=+724.917145991" watchObservedRunningTime="2026-02-18 16:41:44.652032404 +0000 UTC m=+724.917643313" Feb 18 16:41:45 crc kubenswrapper[4812]: I0218 16:41:45.628894 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:45 crc kubenswrapper[4812]: I0218 16:41:45.659476 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.851078 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p"] Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.853499 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.858410 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.864862 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p"] Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.992495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.992578 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:49 crc kubenswrapper[4812]: I0218 16:41:49.992627 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdczh\" (UniqueName: \"kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.093722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.093791 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.093829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdczh\" (UniqueName: \"kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.094440 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.094603 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.125027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdczh\" (UniqueName: \"kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.179815 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.212221 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(b05ad97893e75ef2c51968826937fd54fb8499dda6c308c0264c7c97937495c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.212316 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(b05ad97893e75ef2c51968826937fd54fb8499dda6c308c0264c7c97937495c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.212352 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(b05ad97893e75ef2c51968826937fd54fb8499dda6c308c0264c7c97937495c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.212420 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace(df5ec246-8380-4818-8e51-36ab37833c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace(df5ec246-8380-4818-8e51-36ab37833c23)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(b05ad97893e75ef2c51968826937fd54fb8499dda6c308c0264c7c97937495c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" podUID="df5ec246-8380-4818-8e51-36ab37833c23" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.663142 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: I0218 16:41:50.663717 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.691703 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(7c8d5584c4e8c0fe24c5e00f62b27c5e06d22085960ac6816210fcbfde10151c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.691828 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(7c8d5584c4e8c0fe24c5e00f62b27c5e06d22085960ac6816210fcbfde10151c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.691870 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(7c8d5584c4e8c0fe24c5e00f62b27c5e06d22085960ac6816210fcbfde10151c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:41:50 crc kubenswrapper[4812]: E0218 16:41:50.691983 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace(df5ec246-8380-4818-8e51-36ab37833c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace(df5ec246-8380-4818-8e51-36ab37833c23)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_openshift-marketplace_df5ec246-8380-4818-8e51-36ab37833c23_0(7c8d5584c4e8c0fe24c5e00f62b27c5e06d22085960ac6816210fcbfde10151c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" podUID="df5ec246-8380-4818-8e51-36ab37833c23" Feb 18 16:41:52 crc kubenswrapper[4812]: I0218 16:41:52.507947 4812 scope.go:117] "RemoveContainer" containerID="c9cc37b9bafc7a9f647bcdd5d7319d73c4ed7efbbbde1b2c61a0de90b6b92e56" Feb 18 16:41:52 crc kubenswrapper[4812]: I0218 16:41:52.679492 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/2.log" Feb 18 16:41:53 crc kubenswrapper[4812]: I0218 16:41:53.689793 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-prrcg_cf2b75a7-be08-4a51-b100-9a75359bbd18/kube-multus/2.log" Feb 18 16:41:53 crc kubenswrapper[4812]: I0218 16:41:53.689868 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-prrcg" event={"ID":"cf2b75a7-be08-4a51-b100-9a75359bbd18","Type":"ContainerStarted","Data":"c87d1657e8112ff89c85a94900e164b1a1a8bb403c9cf8fd4fc5a362bd7a2c59"} Feb 18 16:42:02 crc kubenswrapper[4812]: I0218 16:42:02.508519 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:42:02 crc kubenswrapper[4812]: I0218 16:42:02.510231 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:42:02 crc kubenswrapper[4812]: I0218 16:42:02.774624 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p"] Feb 18 16:42:02 crc kubenswrapper[4812]: W0218 16:42:02.780374 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf5ec246_8380_4818_8e51_36ab37833c23.slice/crio-fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4 WatchSource:0}: Error finding container fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4: Status 404 returned error can't find the container with id fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4 Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.414010 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.414885 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.414983 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.415990 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.416068 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60" gracePeriod=600 Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.758701 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60" exitCode=0 Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.758780 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60"} Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.758819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3"} Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.758840 4812 scope.go:117] "RemoveContainer" containerID="f551b4e3725c8ec7369e01e7cde29c58b59ecfc6a76d572a4f7827923b390bdb" Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.761237 4812 generic.go:334] "Generic (PLEG): container finished" podID="df5ec246-8380-4818-8e51-36ab37833c23" containerID="d0717cdf60c9d35c4c4ff1e22ea04f8001e727fef1a1c98e6e82db8e924fe66a" exitCode=0 Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.761338 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" event={"ID":"df5ec246-8380-4818-8e51-36ab37833c23","Type":"ContainerDied","Data":"d0717cdf60c9d35c4c4ff1e22ea04f8001e727fef1a1c98e6e82db8e924fe66a"} Feb 18 16:42:03 crc kubenswrapper[4812]: I0218 16:42:03.761379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" event={"ID":"df5ec246-8380-4818-8e51-36ab37833c23","Type":"ContainerStarted","Data":"fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4"} Feb 18 16:42:05 crc kubenswrapper[4812]: I0218 16:42:05.780571 4812 generic.go:334] "Generic (PLEG): container finished" podID="df5ec246-8380-4818-8e51-36ab37833c23" containerID="e073e1a57d8e6e52399e0f3be16d79212e8355db92bb7972a54e8f1a0ce77c1e" exitCode=0 Feb 18 16:42:05 crc kubenswrapper[4812]: I0218 16:42:05.780690 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" event={"ID":"df5ec246-8380-4818-8e51-36ab37833c23","Type":"ContainerDied","Data":"e073e1a57d8e6e52399e0f3be16d79212e8355db92bb7972a54e8f1a0ce77c1e"} Feb 18 16:42:06 crc kubenswrapper[4812]: I0218 16:42:06.791321 4812 generic.go:334] "Generic (PLEG): container finished" podID="df5ec246-8380-4818-8e51-36ab37833c23" containerID="62c291dace10e7f2726a6602a48814146a7d9d8db6ad0a7185a1e9c946fb012a" exitCode=0 Feb 18 16:42:06 crc kubenswrapper[4812]: I0218 16:42:06.791374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" event={"ID":"df5ec246-8380-4818-8e51-36ab37833c23","Type":"ContainerDied","Data":"62c291dace10e7f2726a6602a48814146a7d9d8db6ad0a7185a1e9c946fb012a"} Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.055822 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r6dsv" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.073044 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.186908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle\") pod \"df5ec246-8380-4818-8e51-36ab37833c23\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.186990 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdczh\" (UniqueName: \"kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh\") pod \"df5ec246-8380-4818-8e51-36ab37833c23\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.187125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util\") pod \"df5ec246-8380-4818-8e51-36ab37833c23\" (UID: \"df5ec246-8380-4818-8e51-36ab37833c23\") " Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.191943 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle" (OuterVolumeSpecName: "bundle") pod "df5ec246-8380-4818-8e51-36ab37833c23" (UID: "df5ec246-8380-4818-8e51-36ab37833c23"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.198221 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh" (OuterVolumeSpecName: "kube-api-access-pdczh") pod "df5ec246-8380-4818-8e51-36ab37833c23" (UID: "df5ec246-8380-4818-8e51-36ab37833c23"). InnerVolumeSpecName "kube-api-access-pdczh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.209703 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util" (OuterVolumeSpecName: "util") pod "df5ec246-8380-4818-8e51-36ab37833c23" (UID: "df5ec246-8380-4818-8e51-36ab37833c23"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.288281 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-util\") on node \"crc\" DevicePath \"\"" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.288326 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df5ec246-8380-4818-8e51-36ab37833c23-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.288341 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdczh\" (UniqueName: \"kubernetes.io/projected/df5ec246-8380-4818-8e51-36ab37833c23-kube-api-access-pdczh\") on node \"crc\" DevicePath \"\"" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.809380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" event={"ID":"df5ec246-8380-4818-8e51-36ab37833c23","Type":"ContainerDied","Data":"fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4"} Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.809422 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdf35985e4dacf930493978ae05a1cfc7b934d07b9e7040c7d6897de310b81a4" Feb 18 16:42:08 crc kubenswrapper[4812]: I0218 16:42:08.809516 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.698835 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2"] Feb 18 16:42:18 crc kubenswrapper[4812]: E0218 16:42:18.699901 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="util" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.699917 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="util" Feb 18 16:42:18 crc kubenswrapper[4812]: E0218 16:42:18.699942 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="pull" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.699949 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="pull" Feb 18 16:42:18 crc kubenswrapper[4812]: E0218 16:42:18.699960 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="extract" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.699966 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="extract" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.700113 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5ec246-8380-4818-8e51-36ab37833c23" containerName="extract" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.700645 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.703996 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9sl47" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.704054 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.705179 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.709919 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.757226 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqhn\" (UniqueName: \"kubernetes.io/projected/c18e9953-e57b-4c8e-832e-a8a62a1b00d4-kube-api-access-hcqhn\") pod \"obo-prometheus-operator-68bc856cb9-cdpj2\" (UID: \"c18e9953-e57b-4c8e-832e-a8a62a1b00d4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.815637 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.816421 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.818440 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-hjwqg" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.820159 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.832286 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.837722 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.838678 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.858594 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.858889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqhn\" (UniqueName: \"kubernetes.io/projected/c18e9953-e57b-4c8e-832e-a8a62a1b00d4-kube-api-access-hcqhn\") pod \"obo-prometheus-operator-68bc856cb9-cdpj2\" (UID: \"c18e9953-e57b-4c8e-832e-a8a62a1b00d4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.858970 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.859046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.859190 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.868941 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.886882 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqhn\" (UniqueName: \"kubernetes.io/projected/c18e9953-e57b-4c8e-832e-a8a62a1b00d4-kube-api-access-hcqhn\") pod \"obo-prometheus-operator-68bc856cb9-cdpj2\" (UID: \"c18e9953-e57b-4c8e-832e-a8a62a1b00d4\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.952771 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8l5sf"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.953779 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.957146 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.958580 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hmpnc" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.960068 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.960162 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.960192 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.960220 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.967668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.970972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a5d61a2-337d-4f14-ba0f-e1625e17d85b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-p25pz\" (UID: \"3a5d61a2-337d-4f14-ba0f-e1625e17d85b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.971549 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8l5sf"] Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.979271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:18 crc kubenswrapper[4812]: I0218 16:42:18.979308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2063af-e1c3-4d59-8aed-39615ddeab3e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56557b685c-gmbbr\" (UID: \"cf2063af-e1c3-4d59-8aed-39615ddeab3e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.026311 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.073126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.073202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp8rp\" (UniqueName: \"kubernetes.io/projected/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-kube-api-access-lp8rp\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.113938 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g467b"] Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.114749 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.117158 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-j8s7q" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.133006 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.146441 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g467b"] Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.171081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.174672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbn68\" (UniqueName: \"kubernetes.io/projected/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-kube-api-access-wbn68\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.174729 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.174754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.174790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8rp\" (UniqueName: \"kubernetes.io/projected/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-kube-api-access-lp8rp\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.181735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.198435 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8rp\" (UniqueName: \"kubernetes.io/projected/38d2ae21-5a2d-42e7-8beb-e03bc7354dbe-kube-api-access-lp8rp\") pod \"observability-operator-59bdc8b94-8l5sf\" (UID: \"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe\") " pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.276740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbn68\" (UniqueName: \"kubernetes.io/projected/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-kube-api-access-wbn68\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.276794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " 
pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.277644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.303815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbn68\" (UniqueName: \"kubernetes.io/projected/7b7793e3-e91d-4d48-bacc-bdfd155dbc78-kube-api-access-wbn68\") pod \"perses-operator-5bf474d74f-g467b\" (UID: \"7b7793e3-e91d-4d48-bacc-bdfd155dbc78\") " pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.320336 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.353229 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2"] Feb 18 16:42:19 crc kubenswrapper[4812]: W0218 16:42:19.376615 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc18e9953_e57b_4c8e_832e_a8a62a1b00d4.slice/crio-a24c024c155ddfae1b6939e37b41b3f757fbff364ae4a0bd53c811f46e194d2a WatchSource:0}: Error finding container a24c024c155ddfae1b6939e37b41b3f757fbff364ae4a0bd53c811f46e194d2a: Status 404 returned error can't find the container with id a24c024c155ddfae1b6939e37b41b3f757fbff364ae4a0bd53c811f46e194d2a Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.433144 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.510806 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz"] Feb 18 16:42:19 crc kubenswrapper[4812]: W0218 16:42:19.524439 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5d61a2_337d_4f14_ba0f_e1625e17d85b.slice/crio-b1305184daa226fdf8ec697ab808d0013885a60b19d33d610b2ee38ce6b69661 WatchSource:0}: Error finding container b1305184daa226fdf8ec697ab808d0013885a60b19d33d610b2ee38ce6b69661: Status 404 returned error can't find the container with id b1305184daa226fdf8ec697ab808d0013885a60b19d33d610b2ee38ce6b69661 Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.642676 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8l5sf"] Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.702212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr"] Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.842383 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g467b"] Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.880607 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-g467b" event={"ID":"7b7793e3-e91d-4d48-bacc-bdfd155dbc78","Type":"ContainerStarted","Data":"4c23bca55a353d8cdfec7f44dae7b70a1df6a69d387cff6dd6621bbe3fa52775"} Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.888501 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" event={"ID":"cf2063af-e1c3-4d59-8aed-39615ddeab3e","Type":"ContainerStarted","Data":"35b9fe278e8d40052b3b421ebf6641ec5cb291b853bf6f3d2512a480dd905971"} Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.890714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" event={"ID":"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe","Type":"ContainerStarted","Data":"fd62d017f73efe528554fddfac0e1ca5a72beaf10867ef815b33850faa99cfb0"} Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.891874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" event={"ID":"3a5d61a2-337d-4f14-ba0f-e1625e17d85b","Type":"ContainerStarted","Data":"b1305184daa226fdf8ec697ab808d0013885a60b19d33d610b2ee38ce6b69661"} Feb 18 16:42:19 crc kubenswrapper[4812]: I0218 16:42:19.892962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" event={"ID":"c18e9953-e57b-4c8e-832e-a8a62a1b00d4","Type":"ContainerStarted","Data":"a24c024c155ddfae1b6939e37b41b3f757fbff364ae4a0bd53c811f46e194d2a"} Feb 18 16:42:24 crc kubenswrapper[4812]: I0218 16:42:24.697840 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.051753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" 
event={"ID":"c18e9953-e57b-4c8e-832e-a8a62a1b00d4","Type":"ContainerStarted","Data":"6859a48179d761ba290e26c295a0391fe7bffa442d8789e40d2fe197dbe68446"} Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.054600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-g467b" event={"ID":"7b7793e3-e91d-4d48-bacc-bdfd155dbc78","Type":"ContainerStarted","Data":"2c650475ebe968d17a64c8da4cab66534af59ee05d8c60ce54ec27c5ffbf3e7b"} Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.054733 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.057120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" event={"ID":"cf2063af-e1c3-4d59-8aed-39615ddeab3e","Type":"ContainerStarted","Data":"9ee7590ffb1127a143f568463d7d709a5d2f1cc52c65bdaf5734e85a81f97354"} Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.059213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" event={"ID":"38d2ae21-5a2d-42e7-8beb-e03bc7354dbe","Type":"ContainerStarted","Data":"cbabd69e4b981001390a269c38568dd5a4a0bc7a0f424719ce44080ffd9d7d80"} Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.059454 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.061960 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.063888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" event={"ID":"3a5d61a2-337d-4f14-ba0f-e1625e17d85b","Type":"ContainerStarted","Data":"ae488b90e58d15d47302d0359e23e2bb2878f5b5a4926ab8619e5dbfea53b5a6"} Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.113656 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cdpj2" podStartSLOduration=2.476450625 podStartE2EDuration="16.113639905s" podCreationTimestamp="2026-02-18 16:42:18 +0000 UTC" firstStartedPulling="2026-02-18 16:42:19.381115546 +0000 UTC m=+759.646726455" lastFinishedPulling="2026-02-18 16:42:33.018304826 +0000 UTC m=+773.283915735" observedRunningTime="2026-02-18 16:42:34.083999504 +0000 UTC m=+774.349610413" watchObservedRunningTime="2026-02-18 16:42:34.113639905 +0000 UTC m=+774.379250814" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.114647 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-p25pz" podStartSLOduration=2.6750624800000002 podStartE2EDuration="16.114642s" podCreationTimestamp="2026-02-18 16:42:18 +0000 UTC" firstStartedPulling="2026-02-18 16:42:19.533340866 +0000 UTC m=+759.798951775" lastFinishedPulling="2026-02-18 16:42:32.972920386 +0000 UTC m=+773.238531295" observedRunningTime="2026-02-18 16:42:34.109012231 +0000 UTC m=+774.374623150" watchObservedRunningTime="2026-02-18 16:42:34.114642 +0000 UTC m=+774.380252909" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.135556 4812 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" podStartSLOduration=2.830464875 podStartE2EDuration="16.135523155s" podCreationTimestamp="2026-02-18 16:42:18 +0000 UTC" firstStartedPulling="2026-02-18 16:42:19.667437696 +0000 UTC m=+759.933048605" lastFinishedPulling="2026-02-18 16:42:32.972495976 +0000 UTC m=+773.238106885" observedRunningTime="2026-02-18 16:42:34.128621825 +0000 UTC m=+774.394232744" watchObservedRunningTime="2026-02-18 16:42:34.135523155 +0000 UTC m=+774.401134064" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.153791 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56557b685c-gmbbr" podStartSLOduration=2.893796489 podStartE2EDuration="16.153761155s" podCreationTimestamp="2026-02-18 16:42:18 +0000 UTC" firstStartedPulling="2026-02-18 16:42:19.711795021 +0000 UTC m=+759.977405930" lastFinishedPulling="2026-02-18 16:42:32.971759687 +0000 UTC m=+773.237370596" observedRunningTime="2026-02-18 16:42:34.147823139 +0000 UTC m=+774.413434048" watchObservedRunningTime="2026-02-18 16:42:34.153761155 +0000 UTC m=+774.419372064" Feb 18 16:42:34 crc kubenswrapper[4812]: I0218 16:42:34.184190 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-g467b" podStartSLOduration=2.158028407 podStartE2EDuration="15.184176216s" podCreationTimestamp="2026-02-18 16:42:19 +0000 UTC" firstStartedPulling="2026-02-18 16:42:19.857208977 +0000 UTC m=+760.122819886" lastFinishedPulling="2026-02-18 16:42:32.883356786 +0000 UTC m=+773.148967695" observedRunningTime="2026-02-18 16:42:34.181431958 +0000 UTC m=+774.447042867" watchObservedRunningTime="2026-02-18 16:42:34.184176216 +0000 UTC m=+774.449787125" Feb 18 16:42:39 crc kubenswrapper[4812]: I0218 16:42:39.436744 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-g467b" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.529430 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46"] Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.531788 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.534792 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.540042 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46"] Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.718554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cxrk\" (UniqueName: \"kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.718656 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.718711 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.820033 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.820175 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cxrk\" (UniqueName: \"kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.820211 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.820628 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.820797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.853171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cxrk\" (UniqueName: \"kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:55 crc kubenswrapper[4812]: I0218 16:42:55.879271 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:42:56 crc kubenswrapper[4812]: I0218 16:42:56.283952 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46"] Feb 18 16:42:56 crc kubenswrapper[4812]: W0218 16:42:56.291559 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3bb6960_2b83_4e82_86d3_5ada0d7be18a.slice/crio-9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c WatchSource:0}: Error finding container 9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c: Status 404 returned error can't find the container with id 9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.214249 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerID="f999e48e477c7b74a5fb2f444c5760ea5727d9a58e91c0a6477119d8127f8967" exitCode=0 Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.214331 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" event={"ID":"a3bb6960-2b83-4e82-86d3-5ada0d7be18a","Type":"ContainerDied","Data":"f999e48e477c7b74a5fb2f444c5760ea5727d9a58e91c0a6477119d8127f8967"} Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.214361 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" event={"ID":"a3bb6960-2b83-4e82-86d3-5ada0d7be18a","Type":"ContainerStarted","Data":"9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c"} Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.888263 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.893751 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:57 crc kubenswrapper[4812]: I0218 16:42:57.896936 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.062989 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.063155 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.063207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9cv\" (UniqueName: \"kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.163935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.163997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k9cv\" (UniqueName: \"kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.164044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.164483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.164567 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.194296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7k9cv\" (UniqueName: \"kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv\") pod \"redhat-operators-5dg7q\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.254310 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:42:58 crc kubenswrapper[4812]: I0218 16:42:58.685042 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:42:58 crc kubenswrapper[4812]: W0218 16:42:58.695621 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575ad334_bbf7_42b7_9268_18ed15f551d0.slice/crio-092094b3a98cc889781dff94250fc1969876968bf00b074e1c471e8acae3dcea WatchSource:0}: Error finding container 092094b3a98cc889781dff94250fc1969876968bf00b074e1c471e8acae3dcea: Status 404 returned error can't find the container with id 092094b3a98cc889781dff94250fc1969876968bf00b074e1c471e8acae3dcea Feb 18 16:42:59 crc kubenswrapper[4812]: I0218 16:42:59.234378 4812 generic.go:334] "Generic (PLEG): container finished" podID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerID="b0a5ff2c4d6c428b9ddfe8500741e76367e651009ff0396f7766014e09910dcf" exitCode=0 Feb 18 16:42:59 crc kubenswrapper[4812]: I0218 16:42:59.234831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerDied","Data":"b0a5ff2c4d6c428b9ddfe8500741e76367e651009ff0396f7766014e09910dcf"} Feb 18 16:42:59 crc kubenswrapper[4812]: I0218 16:42:59.234874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerStarted","Data":"092094b3a98cc889781dff94250fc1969876968bf00b074e1c471e8acae3dcea"} Feb 18 16:42:59 crc kubenswrapper[4812]: I0218 16:42:59.242227 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerID="6e8379f7141b099acedffa5bc8df0fb9b5389c090428492fbef56470c5cf1f60" exitCode=0 Feb 18 16:42:59 crc kubenswrapper[4812]: I0218 16:42:59.242302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" event={"ID":"a3bb6960-2b83-4e82-86d3-5ada0d7be18a","Type":"ContainerDied","Data":"6e8379f7141b099acedffa5bc8df0fb9b5389c090428492fbef56470c5cf1f60"} Feb 18 16:43:00 crc kubenswrapper[4812]: I0218 16:43:00.250587 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerID="41971b5eae530d7b1395fa3822dae310533e9c44a78795e8338fe8122e790c5d" exitCode=0 Feb 18 16:43:00 crc kubenswrapper[4812]: I0218 16:43:00.250678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" event={"ID":"a3bb6960-2b83-4e82-86d3-5ada0d7be18a","Type":"ContainerDied","Data":"41971b5eae530d7b1395fa3822dae310533e9c44a78795e8338fe8122e790c5d"} Feb 18 16:43:00 crc kubenswrapper[4812]: I0218 16:43:00.253000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" 
event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerStarted","Data":"fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6"} Feb 18 16:43:00 crc kubenswrapper[4812]: E0218 16:43:00.869418 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575ad334_bbf7_42b7_9268_18ed15f551d0.slice/crio-fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575ad334_bbf7_42b7_9268_18ed15f551d0.slice/crio-conmon-fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.263080 4812 generic.go:334] "Generic (PLEG): container finished" podID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerID="fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6" exitCode=0 Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.263275 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerDied","Data":"fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6"} Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.551113 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.718566 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cxrk\" (UniqueName: \"kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk\") pod \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.718634 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle\") pod \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.718779 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util\") pod \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\" (UID: \"a3bb6960-2b83-4e82-86d3-5ada0d7be18a\") " Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.720420 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle" (OuterVolumeSpecName: "bundle") pod "a3bb6960-2b83-4e82-86d3-5ada0d7be18a" (UID: "a3bb6960-2b83-4e82-86d3-5ada0d7be18a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.730604 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk" (OuterVolumeSpecName: "kube-api-access-2cxrk") pod "a3bb6960-2b83-4e82-86d3-5ada0d7be18a" (UID: "a3bb6960-2b83-4e82-86d3-5ada0d7be18a"). InnerVolumeSpecName "kube-api-access-2cxrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.820801 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cxrk\" (UniqueName: \"kubernetes.io/projected/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-kube-api-access-2cxrk\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:01 crc kubenswrapper[4812]: I0218 16:43:01.820842 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:02 crc kubenswrapper[4812]: I0218 16:43:02.274927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" event={"ID":"a3bb6960-2b83-4e82-86d3-5ada0d7be18a","Type":"ContainerDied","Data":"9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c"} Feb 18 16:43:02 crc kubenswrapper[4812]: I0218 16:43:02.274974 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d499d4cd6b769b8080a9c92d39f2daacd94a3b5c6510165da514c8a3c50691c" Feb 18 16:43:02 crc kubenswrapper[4812]: I0218 16:43:02.275006 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46" Feb 18 16:43:03 crc kubenswrapper[4812]: I0218 16:43:03.832446 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util" (OuterVolumeSpecName: "util") pod "a3bb6960-2b83-4e82-86d3-5ada0d7be18a" (UID: "a3bb6960-2b83-4e82-86d3-5ada0d7be18a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:03 crc kubenswrapper[4812]: I0218 16:43:03.850683 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3bb6960-2b83-4e82-86d3-5ada0d7be18a-util\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.292314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerStarted","Data":"075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6"} Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.316862 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dg7q" podStartSLOduration=2.6831742629999997 podStartE2EDuration="7.316837725s" podCreationTimestamp="2026-02-18 16:42:57 +0000 UTC" firstStartedPulling="2026-02-18 16:42:59.239059564 +0000 UTC m=+799.504670473" lastFinishedPulling="2026-02-18 16:43:03.872723026 +0000 UTC m=+804.138333935" observedRunningTime="2026-02-18 16:43:04.312504518 +0000 UTC m=+804.578115447" watchObservedRunningTime="2026-02-18 16:43:04.316837725 +0000 UTC m=+804.582448634" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.695330 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wj667"] Feb 18 16:43:04 crc kubenswrapper[4812]: E0218 16:43:04.695619 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="pull" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.695635 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" 
containerName="pull" Feb 18 16:43:04 crc kubenswrapper[4812]: E0218 16:43:04.695647 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="extract" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.695653 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="extract" Feb 18 16:43:04 crc kubenswrapper[4812]: E0218 16:43:04.695670 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="util" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.695678 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="util" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.695776 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bb6960-2b83-4e82-86d3-5ada0d7be18a" containerName="extract" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.696248 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.698821 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.699029 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.699205 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-nft58" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.709398 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wj667"] Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.864576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnh5\" (UniqueName: \"kubernetes.io/projected/7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58-kube-api-access-jvnh5\") pod \"nmstate-operator-694c9596b7-wj667\" (UID: \"7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.965489 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnh5\" (UniqueName: \"kubernetes.io/projected/7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58-kube-api-access-jvnh5\") pod \"nmstate-operator-694c9596b7-wj667\" (UID: \"7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" Feb 18 16:43:04 crc kubenswrapper[4812]: I0218 16:43:04.990492 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnh5\" (UniqueName: \"kubernetes.io/projected/7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58-kube-api-access-jvnh5\") pod \"nmstate-operator-694c9596b7-wj667\" (UID: \"7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" Feb 18 16:43:05 crc kubenswrapper[4812]: I0218 16:43:05.012489 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" Feb 18 16:43:05 crc kubenswrapper[4812]: I0218 16:43:05.286801 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wj667"] Feb 18 16:43:06 crc kubenswrapper[4812]: I0218 16:43:06.305446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" event={"ID":"7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58","Type":"ContainerStarted","Data":"e150555b2299bd2a9e103444df83bebe15997d980cb6bd557bcd06e0e7b5fc65"} Feb 18 16:43:08 crc kubenswrapper[4812]: I0218 16:43:08.254691 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:08 crc kubenswrapper[4812]: I0218 16:43:08.255093 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:09 crc kubenswrapper[4812]: I0218 16:43:09.299051 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5dg7q" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="registry-server" probeResult="failure" output=< Feb 18 16:43:09 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:43:09 crc kubenswrapper[4812]: > Feb 18 16:43:09 crc kubenswrapper[4812]: I0218 16:43:09.327772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" event={"ID":"7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58","Type":"ContainerStarted","Data":"9b25a845957ed7ed6d4b2e6421b8507f17eef40c5fa13b4f39c314cf87450c31"} Feb 18 16:43:10 crc kubenswrapper[4812]: I0218 16:43:10.341050 4812 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-8l5sf container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 16:43:10 crc kubenswrapper[4812]: I0218 16:43:10.341172 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" podUID="38d2ae21-5a2d-42e7-8beb-e03bc7354dbe" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:43:10 crc kubenswrapper[4812]: I0218 16:43:10.341050 4812 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-8l5sf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 16:43:10 crc kubenswrapper[4812]: I0218 16:43:10.341287 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-8l5sf" podUID="38d2ae21-5a2d-42e7-8beb-e03bc7354dbe" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.255302 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-wj667" podStartSLOduration=3.933759909 podStartE2EDuration="7.255270932s" podCreationTimestamp="2026-02-18 16:43:04 +0000 UTC" 
firstStartedPulling="2026-02-18 16:43:05.305852011 +0000 UTC m=+805.571462920" lastFinishedPulling="2026-02-18 16:43:08.627363034 +0000 UTC m=+808.892973943" observedRunningTime="2026-02-18 16:43:09.352057597 +0000 UTC m=+809.617668506" watchObservedRunningTime="2026-02-18 16:43:11.255270932 +0000 UTC m=+811.520881841" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.257083 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.258675 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.262488 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8lk4g" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.274751 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.285039 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.286075 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.323138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhfs\" (UniqueName: \"kubernetes.io/projected/4eeb831e-7c1b-4a4b-ab6f-de2702714fa7-kube-api-access-jrhfs\") pod \"nmstate-metrics-58c85c668d-k5jtp\" (UID: \"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.323349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dt6\" (UniqueName: \"kubernetes.io/projected/c3fe979f-501c-40cb-a1c4-e84fb119d112-kube-api-access-z8dt6\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.323447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.424361 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhfs\" (UniqueName: \"kubernetes.io/projected/4eeb831e-7c1b-4a4b-ab6f-de2702714fa7-kube-api-access-jrhfs\") pod \"nmstate-metrics-58c85c668d-k5jtp\" (UID: \"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.433158 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dt6\" (UniqueName: \"kubernetes.io/projected/c3fe979f-501c-40cb-a1c4-e84fb119d112-kube-api-access-z8dt6\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 
18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.433209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.483477 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 16:43:11 crc kubenswrapper[4812]: E0218 16:43:11.483742 4812 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 16:43:11 crc kubenswrapper[4812]: E0218 16:43:11.485117 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair podName:c3fe979f-501c-40cb-a1c4-e84fb119d112 nodeName:}" failed. No retries permitted until 2026-02-18 16:43:11.985060822 +0000 UTC m=+812.250671731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair") pod "nmstate-webhook-866bcb46dc-stnd4" (UID: "c3fe979f-501c-40cb-a1c4-e84fb119d112") : secret "openshift-nmstate-webhook" not found Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.509350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhfs\" (UniqueName: \"kubernetes.io/projected/4eeb831e-7c1b-4a4b-ab6f-de2702714fa7-kube-api-access-jrhfs\") pod \"nmstate-metrics-58c85c668d-k5jtp\" (UID: \"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.520392 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-f446z"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.521444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dt6\" (UniqueName: \"kubernetes.io/projected/c3fe979f-501c-40cb-a1c4-e84fb119d112-kube-api-access-z8dt6\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.521987 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.534936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-dbus-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.536736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-nmstate-lock\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.536982 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnbf\" (UniqueName: \"kubernetes.io/projected/6a4fd47f-29ab-45bd-86d8-865d91e44d02-kube-api-access-hnnbf\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.537223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-ovs-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.542725 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.638678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-dbus-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.638794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-nmstate-lock\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.638858 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnbf\" (UniqueName: \"kubernetes.io/projected/6a4fd47f-29ab-45bd-86d8-865d91e44d02-kube-api-access-hnnbf\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.638922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-ovs-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.638978 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-nmstate-lock\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.639030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-ovs-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.639568 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6a4fd47f-29ab-45bd-86d8-865d91e44d02-dbus-socket\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.671849 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.673059 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.679794 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.679858 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.680231 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fwsbj" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.680328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnbf\" (UniqueName: \"kubernetes.io/projected/6a4fd47f-29ab-45bd-86d8-865d91e44d02-kube-api-access-hnnbf\") pod \"nmstate-handler-f446z\" (UID: \"6a4fd47f-29ab-45bd-86d8-865d91e44d02\") " pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.686476 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.740513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1a0e4522-b91b-47b6-a36c-902c8e98a845-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.740603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzpvx\" (UniqueName: \"kubernetes.io/projected/1a0e4522-b91b-47b6-a36c-902c8e98a845-kube-api-access-tzpvx\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.740746 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1a0e4522-b91b-47b6-a36c-902c8e98a845-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.784627 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.841923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0e4522-b91b-47b6-a36c-902c8e98a845-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.842021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1a0e4522-b91b-47b6-a36c-902c8e98a845-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.842065 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzpvx\" (UniqueName: \"kubernetes.io/projected/1a0e4522-b91b-47b6-a36c-902c8e98a845-kube-api-access-tzpvx\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.843373 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1a0e4522-b91b-47b6-a36c-902c8e98a845-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.846687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0e4522-b91b-47b6-a36c-902c8e98a845-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.860028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzpvx\" (UniqueName: \"kubernetes.io/projected/1a0e4522-b91b-47b6-a36c-902c8e98a845-kube-api-access-tzpvx\") pod \"nmstate-console-plugin-5c78fc5d65-7s5pc\" (UID: \"1a0e4522-b91b-47b6-a36c-902c8e98a845\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.893804 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.942987 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cf6759558-lnfw9"] Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.944257 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:11 crc kubenswrapper[4812]: I0218 16:43:11.949626 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cf6759558-lnfw9"] Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.001654 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.037035 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp"] Feb 18 16:43:12 crc kubenswrapper[4812]: W0218 16:43:12.039349 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eeb831e_7c1b_4a4b_ab6f_de2702714fa7.slice/crio-58115d68cb1ed15c161fbfa5020376e39bcc946e90faeee11a689e396077b289 WatchSource:0}: Error finding container 58115d68cb1ed15c161fbfa5020376e39bcc946e90faeee11a689e396077b289: Status 404 returned error can't find the container with id 58115d68cb1ed15c161fbfa5020376e39bcc946e90faeee11a689e396077b289 Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-oauth-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044233 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-trusted-ca-bundle\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7s74\" (UniqueName: \"kubernetes.io/projected/c86399b5-1402-4bdd-9529-958e6fb80304-kube-api-access-k7s74\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-console-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044378 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: 
\"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044403 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-service-ca\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.044438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-oauth-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.051339 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c3fe979f-501c-40cb-a1c4-e84fb119d112-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-stnd4\" (UID: \"c3fe979f-501c-40cb-a1c4-e84fb119d112\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.097719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-service-ca\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146789 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-oauth-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-oauth-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146844 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-trusted-ca-bundle\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146879 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7s74\" (UniqueName: \"kubernetes.io/projected/c86399b5-1402-4bdd-9529-958e6fb80304-kube-api-access-k7s74\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.146902 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-console-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.147930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-console-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.149588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-oauth-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.150252 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-service-ca\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.151302 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c86399b5-1402-4bdd-9529-958e6fb80304-trusted-ca-bundle\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.153045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-serving-cert\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.156473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c86399b5-1402-4bdd-9529-958e6fb80304-console-oauth-config\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.169719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7s74\" (UniqueName: \"kubernetes.io/projected/c86399b5-1402-4bdd-9529-958e6fb80304-kube-api-access-k7s74\") pod \"console-5cf6759558-lnfw9\" (UID: \"c86399b5-1402-4bdd-9529-958e6fb80304\") " pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.204489 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc"] Feb 18 16:43:12 crc kubenswrapper[4812]: W0218 16:43:12.214855 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a0e4522_b91b_47b6_a36c_902c8e98a845.slice/crio-431e73f1543ecb1075eede2e05c2f5d1532d9530046c232f1d9e1c35777498de WatchSource:0}: Error finding container 431e73f1543ecb1075eede2e05c2f5d1532d9530046c232f1d9e1c35777498de: Status 404 returned error can't find the container with id 431e73f1543ecb1075eede2e05c2f5d1532d9530046c232f1d9e1c35777498de Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.265377 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.501737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" event={"ID":"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7","Type":"ContainerStarted","Data":"58115d68cb1ed15c161fbfa5020376e39bcc946e90faeee11a689e396077b289"} Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.505433 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f446z" event={"ID":"6a4fd47f-29ab-45bd-86d8-865d91e44d02","Type":"ContainerStarted","Data":"e3cd5eebef796fe15c678b5e32469c5073e9fb58abc8ec7d30e7743522e5a636"} Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.519662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" event={"ID":"1a0e4522-b91b-47b6-a36c-902c8e98a845","Type":"ContainerStarted","Data":"431e73f1543ecb1075eede2e05c2f5d1532d9530046c232f1d9e1c35777498de"} Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.524826 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cf6759558-lnfw9"] Feb 18 16:43:12 crc kubenswrapper[4812]: I0218 16:43:12.539394 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4"] Feb 18 16:43:12 crc kubenswrapper[4812]: W0218 16:43:12.543470 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86399b5_1402_4bdd_9529_958e6fb80304.slice/crio-d0962d5a32a9547b79dd91c7feb665d514e3f25298b5846bad4e5054268ddf7b WatchSource:0}: Error finding container d0962d5a32a9547b79dd91c7feb665d514e3f25298b5846bad4e5054268ddf7b: Status 404 returned error can't find the container with id d0962d5a32a9547b79dd91c7feb665d514e3f25298b5846bad4e5054268ddf7b Feb 18 16:43:12 crc kubenswrapper[4812]: W0218 16:43:12.544971 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fe979f_501c_40cb_a1c4_e84fb119d112.slice/crio-296cbadb8ca647e071f0ba2c5844ac7c6d75e9b2f861adda9932c667a55e3938 WatchSource:0}: Error finding container 296cbadb8ca647e071f0ba2c5844ac7c6d75e9b2f861adda9932c667a55e3938: Status 404 returned error can't find the container with id 296cbadb8ca647e071f0ba2c5844ac7c6d75e9b2f861adda9932c667a55e3938 Feb 18 16:43:13 crc kubenswrapper[4812]: I0218 16:43:13.517459 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" event={"ID":"c3fe979f-501c-40cb-a1c4-e84fb119d112","Type":"ContainerStarted","Data":"296cbadb8ca647e071f0ba2c5844ac7c6d75e9b2f861adda9932c667a55e3938"} Feb 18 16:43:13 crc 
kubenswrapper[4812]: I0218 16:43:13.520883 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf6759558-lnfw9" event={"ID":"c86399b5-1402-4bdd-9529-958e6fb80304","Type":"ContainerStarted","Data":"4dcbe9a30dc9e3d4e117ab96855899817770793bcd55a128fae6ea9652abd01b"} Feb 18 16:43:13 crc kubenswrapper[4812]: I0218 16:43:13.520990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cf6759558-lnfw9" event={"ID":"c86399b5-1402-4bdd-9529-958e6fb80304","Type":"ContainerStarted","Data":"d0962d5a32a9547b79dd91c7feb665d514e3f25298b5846bad4e5054268ddf7b"} Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.456453 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.457966 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.486333 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.581400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljkb\" (UniqueName: \"kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.581512 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.581731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.683492 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.683627 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.683773 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljkb\" (UniqueName: \"kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " 
pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.684073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.684606 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.709045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljkb\" (UniqueName: \"kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb\") pod \"community-operators-hb4zz\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:14 crc kubenswrapper[4812]: I0218 16:43:14.781187 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:15 crc kubenswrapper[4812]: I0218 16:43:15.386626 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cf6759558-lnfw9" podStartSLOduration=4.386609209 podStartE2EDuration="4.386609209s" podCreationTimestamp="2026-02-18 16:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:43:14.557408687 +0000 UTC m=+814.823019606" watchObservedRunningTime="2026-02-18 16:43:15.386609209 +0000 UTC m=+815.652220118" Feb 18 16:43:15 crc kubenswrapper[4812]: I0218 16:43:15.390713 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:15 crc kubenswrapper[4812]: I0218 16:43:15.541210 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerStarted","Data":"f4cbe557a5ef1744c9228d196c03e5b83d87118304ef3f3c14bc366b3b0a475d"} Feb 18 16:43:16 crc kubenswrapper[4812]: I0218 16:43:16.554929 4812 generic.go:334] "Generic (PLEG): container finished" podID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerID="06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f" exitCode=0 Feb 18 16:43:16 crc kubenswrapper[4812]: I0218 16:43:16.555080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerDied","Data":"06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f"} Feb 18 16:43:18 crc kubenswrapper[4812]: I0218 16:43:18.362063 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:18 crc kubenswrapper[4812]: I0218 16:43:18.419544 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:18 crc kubenswrapper[4812]: I0218 16:43:18.773290 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:43:20 crc kubenswrapper[4812]: I0218 16:43:20.714448 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dg7q" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="registry-server" containerID="cri-o://075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6" gracePeriod=2 Feb 18 16:43:21 crc kubenswrapper[4812]: E0218 16:43:21.220724 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575ad334_bbf7_42b7_9268_18ed15f551d0.slice/crio-conmon-075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:43:21 crc kubenswrapper[4812]: I0218 16:43:21.711746 4812 generic.go:334] "Generic (PLEG): container finished" podID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerID="075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6" exitCode=0 Feb 18 16:43:21 crc kubenswrapper[4812]: I0218 16:43:21.711792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerDied","Data":"075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.198272 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.265694 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.265989 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.272013 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.319083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k9cv\" (UniqueName: \"kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv\") pod \"575ad334-bbf7-42b7-9268-18ed15f551d0\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.319204 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content\") pod \"575ad334-bbf7-42b7-9268-18ed15f551d0\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.319247 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities\") pod \"575ad334-bbf7-42b7-9268-18ed15f551d0\" (UID: \"575ad334-bbf7-42b7-9268-18ed15f551d0\") " Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.321174 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities" (OuterVolumeSpecName: "utilities") pod "575ad334-bbf7-42b7-9268-18ed15f551d0" (UID: 
"575ad334-bbf7-42b7-9268-18ed15f551d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.325277 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv" (OuterVolumeSpecName: "kube-api-access-7k9cv") pod "575ad334-bbf7-42b7-9268-18ed15f551d0" (UID: "575ad334-bbf7-42b7-9268-18ed15f551d0"). InnerVolumeSpecName "kube-api-access-7k9cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.421867 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k9cv\" (UniqueName: \"kubernetes.io/projected/575ad334-bbf7-42b7-9268-18ed15f551d0-kube-api-access-7k9cv\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.421905 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.440776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "575ad334-bbf7-42b7-9268-18ed15f551d0" (UID: "575ad334-bbf7-42b7-9268-18ed15f551d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.523429 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ad334-bbf7-42b7-9268-18ed15f551d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.720007 4812 generic.go:334] "Generic (PLEG): container finished" podID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerID="8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef" exitCode=0 Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.720237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerDied","Data":"8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.721832 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f446z" event={"ID":"6a4fd47f-29ab-45bd-86d8-865d91e44d02","Type":"ContainerStarted","Data":"413fbf5180495a206d5af1a03b1b0365a64178ff1c85b23b747f78a18238adc5"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.722330 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.724743 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" event={"ID":"1a0e4522-b91b-47b6-a36c-902c8e98a845","Type":"ContainerStarted","Data":"d0bdcf2f84174c843eb5c4324eda6c6a846131f73ac675a51137adc0f815fa77"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.726666 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" 
event={"ID":"c3fe979f-501c-40cb-a1c4-e84fb119d112","Type":"ContainerStarted","Data":"367f0d0c42f52e4b5f28a276a859c6f484f406ca05dd7f66378497d8f3fdd156"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.727075 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.728604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" event={"ID":"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7","Type":"ContainerStarted","Data":"89d80a72543b029d01784392f2125aaef925b061c81410ed8b489a7d7b580cf9"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.731667 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dg7q" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.731910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dg7q" event={"ID":"575ad334-bbf7-42b7-9268-18ed15f551d0","Type":"ContainerDied","Data":"092094b3a98cc889781dff94250fc1969876968bf00b074e1c471e8acae3dcea"} Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.731949 4812 scope.go:117] "RemoveContainer" containerID="075a9015f8eddf6c6bd3fd4e2bd235113e385c1e3af6f66e7e43558a77904bd6" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.735915 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cf6759558-lnfw9" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.758139 4812 scope.go:117] "RemoveContainer" containerID="fa590dc54d84b133510f3971de088cd5e66f17b311389944093d9c73d44cb8b6" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.791271 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-7s5pc" podStartSLOduration=2.120570761 podStartE2EDuration="11.79124921s" podCreationTimestamp="2026-02-18 16:43:11 +0000 UTC" firstStartedPulling="2026-02-18 16:43:12.217566158 +0000 UTC m=+812.483177067" lastFinishedPulling="2026-02-18 16:43:21.888244607 +0000 UTC m=+822.153855516" observedRunningTime="2026-02-18 16:43:22.769311729 +0000 UTC m=+823.034922638" watchObservedRunningTime="2026-02-18 16:43:22.79124921 +0000 UTC m=+823.056860119" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.793509 4812 scope.go:117] "RemoveContainer" containerID="b0a5ff2c4d6c428b9ddfe8500741e76367e651009ff0396f7766014e09910dcf" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.800003 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.806625 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dg7q"] Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.810683 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" podStartSLOduration=2.331106966 podStartE2EDuration="11.810663719s" podCreationTimestamp="2026-02-18 16:43:11 +0000 UTC" firstStartedPulling="2026-02-18 16:43:12.549349555 +0000 UTC m=+812.814960464" lastFinishedPulling="2026-02-18 16:43:22.028906308 +0000 UTC m=+822.294517217" observedRunningTime="2026-02-18 16:43:22.802497768 +0000 UTC m=+823.068108687" watchObservedRunningTime="2026-02-18 16:43:22.810663719 +0000 UTC m=+823.076274628" Feb 18 16:43:22 crc 
kubenswrapper[4812]: I0218 16:43:22.830428 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-f446z" podStartSLOduration=1.8739377849999999 podStartE2EDuration="11.830399166s" podCreationTimestamp="2026-02-18 16:43:11 +0000 UTC" firstStartedPulling="2026-02-18 16:43:11.960404622 +0000 UTC m=+812.226015531" lastFinishedPulling="2026-02-18 16:43:21.916866003 +0000 UTC m=+822.182476912" observedRunningTime="2026-02-18 16:43:22.819835226 +0000 UTC m=+823.085446135" watchObservedRunningTime="2026-02-18 16:43:22.830399166 +0000 UTC m=+823.096010085" Feb 18 16:43:22 crc kubenswrapper[4812]: I0218 16:43:22.879511 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:43:24 crc kubenswrapper[4812]: I0218 16:43:24.518067 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" path="/var/lib/kubelet/pods/575ad334-bbf7-42b7-9268-18ed15f551d0/volumes" Feb 18 16:43:24 crc kubenswrapper[4812]: I0218 16:43:24.752520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerStarted","Data":"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548"} Feb 18 16:43:24 crc kubenswrapper[4812]: I0218 16:43:24.773876 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hb4zz" podStartSLOduration=3.61664278 podStartE2EDuration="10.773853965s" podCreationTimestamp="2026-02-18 16:43:14 +0000 UTC" firstStartedPulling="2026-02-18 16:43:16.558009396 +0000 UTC m=+816.823620305" lastFinishedPulling="2026-02-18 16:43:23.715220581 +0000 UTC m=+823.980831490" observedRunningTime="2026-02-18 16:43:24.769880237 +0000 UTC m=+825.035491146" watchObservedRunningTime="2026-02-18 16:43:24.773853965 +0000 UTC m=+825.039464874" Feb 18 16:43:24 crc kubenswrapper[4812]: I0218 16:43:24.781831 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:24 crc kubenswrapper[4812]: I0218 16:43:24.782028 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:25 crc kubenswrapper[4812]: I0218 16:43:25.767624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" event={"ID":"4eeb831e-7c1b-4a4b-ab6f-de2702714fa7","Type":"ContainerStarted","Data":"b3a86207c2ce27c3293304814ec05f9779a1af60a17f75520ea3380c0555415c"} Feb 18 16:43:25 crc kubenswrapper[4812]: I0218 16:43:25.790959 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-k5jtp" podStartSLOduration=2.011747735 podStartE2EDuration="14.790939332s" podCreationTimestamp="2026-02-18 16:43:11 +0000 UTC" firstStartedPulling="2026-02-18 16:43:12.041661667 +0000 UTC m=+812.307272576" lastFinishedPulling="2026-02-18 16:43:24.820853264 +0000 UTC m=+825.086464173" observedRunningTime="2026-02-18 16:43:25.782413192 +0000 UTC m=+826.048024131" watchObservedRunningTime="2026-02-18 16:43:25.790939332 +0000 UTC m=+826.056550241" Feb 18 16:43:25 crc kubenswrapper[4812]: I0218 16:43:25.820885 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hb4zz" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" 
containerName="registry-server" probeResult="failure" output=< Feb 18 16:43:25 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:43:25 crc kubenswrapper[4812]: > Feb 18 16:43:31 crc kubenswrapper[4812]: I0218 16:43:31.922548 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-f446z" Feb 18 16:43:32 crc kubenswrapper[4812]: I0218 16:43:32.469332 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-stnd4" Feb 18 16:43:34 crc kubenswrapper[4812]: I0218 16:43:34.825877 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:34 crc kubenswrapper[4812]: I0218 16:43:34.875409 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:35 crc kubenswrapper[4812]: I0218 16:43:35.068626 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:36 crc kubenswrapper[4812]: I0218 16:43:36.839352 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hb4zz" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="registry-server" containerID="cri-o://6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548" gracePeriod=2 Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.219264 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.373011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content\") pod \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.373232 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ljkb\" (UniqueName: \"kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb\") pod \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.373344 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities\") pod \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\" (UID: \"079108b3-3677-4bd7-8ad0-bc8b98a78a84\") " Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.374812 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities" (OuterVolumeSpecName: "utilities") pod "079108b3-3677-4bd7-8ad0-bc8b98a78a84" (UID: "079108b3-3677-4bd7-8ad0-bc8b98a78a84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.380332 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb" (OuterVolumeSpecName: "kube-api-access-7ljkb") pod "079108b3-3677-4bd7-8ad0-bc8b98a78a84" (UID: "079108b3-3677-4bd7-8ad0-bc8b98a78a84"). InnerVolumeSpecName "kube-api-access-7ljkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.426138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "079108b3-3677-4bd7-8ad0-bc8b98a78a84" (UID: "079108b3-3677-4bd7-8ad0-bc8b98a78a84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.474494 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ljkb\" (UniqueName: \"kubernetes.io/projected/079108b3-3677-4bd7-8ad0-bc8b98a78a84-kube-api-access-7ljkb\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.474541 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.474556 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079108b3-3677-4bd7-8ad0-bc8b98a78a84-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.849949 4812 generic.go:334] "Generic (PLEG): container finished" podID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerID="6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548" exitCode=0 Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.850009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerDied","Data":"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548"} Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.850060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb4zz" event={"ID":"079108b3-3677-4bd7-8ad0-bc8b98a78a84","Type":"ContainerDied","Data":"f4cbe557a5ef1744c9228d196c03e5b83d87118304ef3f3c14bc366b3b0a475d"} Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.850090 4812 scope.go:117] "RemoveContainer" containerID="6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.850288 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb4zz" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.873038 4812 scope.go:117] "RemoveContainer" containerID="8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.897727 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.902158 4812 scope.go:117] "RemoveContainer" containerID="06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.904869 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hb4zz"] Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.923074 4812 scope.go:117] "RemoveContainer" containerID="6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548" Feb 18 16:43:37 crc kubenswrapper[4812]: E0218 16:43:37.923409 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548\": container with ID starting with 6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548 not found: ID does not exist" containerID="6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.923452 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548"} err="failed to get container status \"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548\": rpc error: code = NotFound desc = could not find container \"6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548\": container with ID starting with 6f104863e1a5bf8f43dbe6a1e02f048f0f6a5d68a5bd7f31e6ced1488d9a8548 not found: ID does not exist" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.923473 4812 scope.go:117] "RemoveContainer" containerID="8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef" Feb 18 16:43:37 crc kubenswrapper[4812]: E0218 16:43:37.923814 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef\": container with ID starting with 8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef not found: ID does not exist" containerID="8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.923839 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef"} err="failed to get container status \"8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef\": rpc error: code = NotFound desc = could not find container \"8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef\": container with ID starting with 8d8982fa135cb059d02b2868edf4ffaceebd0254c2cff32b2dfbefc7d64f56ef not found: ID does not exist" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.923852 4812 scope.go:117] "RemoveContainer" containerID="06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f" Feb 18 16:43:37 crc kubenswrapper[4812]: E0218 16:43:37.924025 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f\": container with ID starting with 06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f not found: ID does not exist" containerID="06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f" Feb 18 16:43:37 crc kubenswrapper[4812]: I0218 16:43:37.924043 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f"} err="failed to get container status \"06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f\": rpc error: code = NotFound desc = could not find container \"06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f\": container with ID starting with 06d865546abfe4c683d8fc75f2cbfe60840dfbc2bcab103bf911fdbc7674813f not found: ID does not exist" Feb 18 16:43:38 crc kubenswrapper[4812]: I0218 16:43:38.517637 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" path="/var/lib/kubelet/pods/079108b3-3677-4bd7-8ad0-bc8b98a78a84/volumes" Feb 18 16:43:47 crc kubenswrapper[4812]: I0218 16:43:47.923542 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-blqkx" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerName="console" containerID="cri-o://6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c" gracePeriod=15 Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.247132 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-blqkx_2ee898c2-0a23-41cb-a680-709b6e8104ff/console/0.log" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.247511 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.347859 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.347912 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5kfq\" (UniqueName: \"kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.347965 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.348003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.348026 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.348046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.348079 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca\") pod \"2ee898c2-0a23-41cb-a680-709b6e8104ff\" (UID: \"2ee898c2-0a23-41cb-a680-709b6e8104ff\") " Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.349382 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config" (OuterVolumeSpecName: "console-config") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.349548 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca" (OuterVolumeSpecName: "service-ca") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.349896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.350916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.355327 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.356329 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq" (OuterVolumeSpecName: "kube-api-access-w5kfq") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "kube-api-access-w5kfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.356330 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2ee898c2-0a23-41cb-a680-709b6e8104ff" (UID: "2ee898c2-0a23-41cb-a680-709b6e8104ff"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449224 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449264 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5kfq\" (UniqueName: \"kubernetes.io/projected/2ee898c2-0a23-41cb-a680-709b6e8104ff-kube-api-access-w5kfq\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449279 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449291 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449305 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449317 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.449329 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ee898c2-0a23-41cb-a680-709b6e8104ff-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928013 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-blqkx_2ee898c2-0a23-41cb-a680-709b6e8104ff/console/0.log" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928403 4812 generic.go:334] "Generic (PLEG): container finished" podID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerID="6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c" exitCode=2 Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928434 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-blqkx" event={"ID":"2ee898c2-0a23-41cb-a680-709b6e8104ff","Type":"ContainerDied","Data":"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c"} Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928465 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-blqkx" event={"ID":"2ee898c2-0a23-41cb-a680-709b6e8104ff","Type":"ContainerDied","Data":"b8c5d5c4080b200667b461f179a09dd4ad91aeaab254d2e5de209a5214b035aa"} Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928484 4812 scope.go:117] "RemoveContainer" containerID="6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.928480 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-blqkx" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.948434 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.955645 4812 scope.go:117] "RemoveContainer" containerID="6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c" Feb 18 16:43:48 crc kubenswrapper[4812]: E0218 16:43:48.956495 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c\": container with ID starting with 6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c not found: ID does not exist" containerID="6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.956530 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c"} err="failed to get container status \"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c\": rpc error: code = NotFound desc = could not find container \"6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c\": container with ID starting with 6398db97ccb498fa6565191d95c1a35964cb70dc7f790a12eb70efdb37b3773c not found: ID does not exist" Feb 18 16:43:48 crc kubenswrapper[4812]: I0218 16:43:48.957490 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-blqkx"] Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320032 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn"] Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320276 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320289 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320304 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="extract-utilities" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320310 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="extract-utilities" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320320 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerName="console" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320327 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerName="console" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320337 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320343 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320360 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="extract-content" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320365 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="extract-content" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320374 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="extract-content" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320380 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="extract-content" Feb 18 16:43:49 crc kubenswrapper[4812]: E0218 16:43:49.320389 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="extract-utilities" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320395 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="extract-utilities" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320485 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="079108b3-3677-4bd7-8ad0-bc8b98a78a84" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320501 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="575ad334-bbf7-42b7-9268-18ed15f551d0" containerName="registry-server" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.320511 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" containerName="console" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.321269 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.325538 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.332921 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn"] Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.362565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9r7\" (UniqueName: \"kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.362696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.362818 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.463619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.464020 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9r7\" (UniqueName: \"kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.464185 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.464425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.465173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.481688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9r7\" (UniqueName: \"kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:49 crc kubenswrapper[4812]: I0218 16:43:49.635924 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:50 crc kubenswrapper[4812]: I0218 16:43:50.053348 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn"] Feb 18 16:43:50 crc kubenswrapper[4812]: I0218 16:43:50.515185 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee898c2-0a23-41cb-a680-709b6e8104ff" path="/var/lib/kubelet/pods/2ee898c2-0a23-41cb-a680-709b6e8104ff/volumes" Feb 18 16:43:50 crc kubenswrapper[4812]: I0218 16:43:50.948865 4812 generic.go:334] "Generic (PLEG): container finished" podID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerID="754016491c295c4cf09446a2c4fab3cbb9c07b75b0d5853000d308b98ffb3c90" exitCode=0 Feb 18 16:43:50 crc kubenswrapper[4812]: I0218 16:43:50.948942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" event={"ID":"ec6d2ba5-6719-472f-a1b5-e5d0bd746608","Type":"ContainerDied","Data":"754016491c295c4cf09446a2c4fab3cbb9c07b75b0d5853000d308b98ffb3c90"} Feb 18 16:43:50 crc kubenswrapper[4812]: I0218 16:43:50.949514 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" event={"ID":"ec6d2ba5-6719-472f-a1b5-e5d0bd746608","Type":"ContainerStarted","Data":"c1b90a144af8c16477c85ecd1c364737fd558b8241f3917865b65c6bd9f27d4d"} Feb 18 16:43:53 crc kubenswrapper[4812]: I0218 16:43:53.968714 4812 generic.go:334] "Generic (PLEG): container finished" podID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerID="35c2c1c68823fd98ee0680033ce27e9282fad10ffa3692fbd0b86d803ef63088" exitCode=0 Feb 18 16:43:53 crc kubenswrapper[4812]: I0218 16:43:53.968800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" event={"ID":"ec6d2ba5-6719-472f-a1b5-e5d0bd746608","Type":"ContainerDied","Data":"35c2c1c68823fd98ee0680033ce27e9282fad10ffa3692fbd0b86d803ef63088"} Feb 18 16:43:54 crc kubenswrapper[4812]: I0218 16:43:54.996743 
4812 generic.go:334] "Generic (PLEG): container finished" podID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerID="5103e97ad41f90e142985f9c0045607e0713efbcff9e14574c4ea9419dbb8e8e" exitCode=0 Feb 18 16:43:54 crc kubenswrapper[4812]: I0218 16:43:54.996874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" event={"ID":"ec6d2ba5-6719-472f-a1b5-e5d0bd746608","Type":"ContainerDied","Data":"5103e97ad41f90e142985f9c0045607e0713efbcff9e14574c4ea9419dbb8e8e"} Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.288865 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.358331 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util\") pod \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.358844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj9r7\" (UniqueName: \"kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7\") pod \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.358986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle\") pod \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\" (UID: \"ec6d2ba5-6719-472f-a1b5-e5d0bd746608\") " Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.360523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle" (OuterVolumeSpecName: "bundle") pod "ec6d2ba5-6719-472f-a1b5-e5d0bd746608" (UID: "ec6d2ba5-6719-472f-a1b5-e5d0bd746608"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.370876 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util" (OuterVolumeSpecName: "util") pod "ec6d2ba5-6719-472f-a1b5-e5d0bd746608" (UID: "ec6d2ba5-6719-472f-a1b5-e5d0bd746608"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.373331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7" (OuterVolumeSpecName: "kube-api-access-vj9r7") pod "ec6d2ba5-6719-472f-a1b5-e5d0bd746608" (UID: "ec6d2ba5-6719-472f-a1b5-e5d0bd746608"). InnerVolumeSpecName "kube-api-access-vj9r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.460303 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-util\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.460368 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj9r7\" (UniqueName: \"kubernetes.io/projected/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-kube-api-access-vj9r7\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:56 crc kubenswrapper[4812]: I0218 16:43:56.460383 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec6d2ba5-6719-472f-a1b5-e5d0bd746608-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:43:57 crc kubenswrapper[4812]: I0218 16:43:57.015329 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" event={"ID":"ec6d2ba5-6719-472f-a1b5-e5d0bd746608","Type":"ContainerDied","Data":"c1b90a144af8c16477c85ecd1c364737fd558b8241f3917865b65c6bd9f27d4d"} Feb 18 16:43:57 crc kubenswrapper[4812]: I0218 16:43:57.015397 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b90a144af8c16477c85ecd1c364737fd558b8241f3917865b65c6bd9f27d4d" Feb 18 16:43:57 crc kubenswrapper[4812]: I0218 16:43:57.015371 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn" Feb 18 16:44:03 crc kubenswrapper[4812]: I0218 16:44:03.413901 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:44:03 crc kubenswrapper[4812]: I0218 16:44:03.414384 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.058149 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9"] Feb 18 16:44:05 crc kubenswrapper[4812]: E0218 16:44:05.059480 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="pull" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.059583 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="pull" Feb 18 16:44:05 crc kubenswrapper[4812]: E0218 16:44:05.059699 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="extract" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.059780 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="extract" Feb 18 16:44:05 crc kubenswrapper[4812]: E0218 16:44:05.059865 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="util" Feb 18 16:44:05 
crc kubenswrapper[4812]: I0218 16:44:05.059933 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="util" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.060170 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6d2ba5-6719-472f-a1b5-e5d0bd746608" containerName="extract" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.060841 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.062878 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.063071 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.063541 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.063675 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zftnn" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.063768 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.079918 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9"] Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.187652 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-webhook-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.187764 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-apiservice-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.187834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97b62\" (UniqueName: \"kubernetes.io/projected/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-kube-api-access-97b62\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.289344 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-webhook-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 
16:44:05.289678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-apiservice-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.289823 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97b62\" (UniqueName: \"kubernetes.io/projected/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-kube-api-access-97b62\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.295750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-apiservice-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.297616 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-webhook-cert\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.321270 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97b62\" (UniqueName: \"kubernetes.io/projected/59215e89-cf30-4ef8-ab0d-fe665a3b2d70-kube-api-access-97b62\") pod \"metallb-operator-controller-manager-8495744bb9-sfmg9\" (UID: \"59215e89-cf30-4ef8-ab0d-fe665a3b2d70\") " pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.378993 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.414324 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c"] Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.415401 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.419450 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.420063 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mgvvx" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.422924 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.433845 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c"] Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.594944 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-apiservice-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.595006 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqv4z\" (UniqueName: \"kubernetes.io/projected/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-kube-api-access-xqv4z\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.595030 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-webhook-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.696354 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-apiservice-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.696428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqv4z\" (UniqueName: \"kubernetes.io/projected/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-kube-api-access-xqv4z\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.696450 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-webhook-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 
16:44:05.701852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-webhook-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.703205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-apiservice-cert\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.738923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqv4z\" (UniqueName: \"kubernetes.io/projected/ce82a09a-a70f-41f4-a4d6-15c1edc08d5e-kube-api-access-xqv4z\") pod \"metallb-operator-webhook-server-84fb85c775-sdk9c\" (UID: \"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e\") " pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.775896 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9"] Feb 18 16:44:05 crc kubenswrapper[4812]: I0218 16:44:05.789118 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:06 crc kubenswrapper[4812]: I0218 16:44:06.068721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" event={"ID":"59215e89-cf30-4ef8-ab0d-fe665a3b2d70","Type":"ContainerStarted","Data":"24b3e6c5809a27739b257a6fb7bdc380e7a2e89c786370eae3bcbc903d0beb08"} Feb 18 16:44:06 crc kubenswrapper[4812]: I0218 16:44:06.089897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c"] Feb 18 16:44:07 crc kubenswrapper[4812]: I0218 16:44:07.077978 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" event={"ID":"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e","Type":"ContainerStarted","Data":"b6a60a45a0173c4b3ba4206bc0c757c467751343b80ef2692234a1e553c18d96"} Feb 18 16:44:12 crc kubenswrapper[4812]: I0218 16:44:12.120468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" event={"ID":"ce82a09a-a70f-41f4-a4d6-15c1edc08d5e","Type":"ContainerStarted","Data":"77b81b967f1d580069e652d49abcadff3028104473936dafcb94ab24bd759529"} Feb 18 16:44:12 crc kubenswrapper[4812]: I0218 16:44:12.121050 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:12 crc kubenswrapper[4812]: I0218 16:44:12.143237 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" podStartSLOduration=1.7828472309999999 podStartE2EDuration="7.143214973s" podCreationTimestamp="2026-02-18 16:44:05 +0000 UTC" firstStartedPulling="2026-02-18 16:44:06.098806656 +0000 UTC m=+866.364417565" lastFinishedPulling="2026-02-18 16:44:11.459174398 +0000 UTC m=+871.724785307" 
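
Aside (illustrative): the "Observed pod startup duration" figures here for metallb-operator-webhook-server-84fb85c775-sdk9c are internally consistent: podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling) equals the reported podStartSLOduration. The stdlib-only check below re-derives that arithmetic; the interpretation (an SLO duration that excludes pull time) is inferred from these figures, not quoted from kubelet source.

    // startup_latency.go: re-derive podStartSLOduration from the entry above.
    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // parse drops the trailing monotonic-clock annotation (" m=+...") that the
    // kubelet appends, then parses the wall-clock part.
    func parse(s string) time.Time {
        if i := strings.Index(s, " m=+"); i >= 0 {
            s = s[:i]
        }
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        firstPull := parse("2026-02-18 16:44:06.098806656 +0000 UTC m=+866.364417565")
        lastPull := parse("2026-02-18 16:44:11.459174398 +0000 UTC m=+871.724785307")
        e2e := 7143214973 * time.Nanosecond // podStartE2EDuration=7.143214973s

        pullWindow := lastPull.Sub(firstPull)
        fmt.Println("pull window:", pullWindow)     // 5.360367742s
        fmt.Println("E2E - pull: ", e2e-pullWindow) // 1.782847231s = podStartSLOduration
    }
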
observedRunningTime="2026-02-18 16:44:12.139913341 +0000 UTC m=+872.405524250" watchObservedRunningTime="2026-02-18 16:44:12.143214973 +0000 UTC m=+872.408825882" Feb 18 16:44:19 crc kubenswrapper[4812]: I0218 16:44:19.163754 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" event={"ID":"59215e89-cf30-4ef8-ab0d-fe665a3b2d70","Type":"ContainerStarted","Data":"4342efe8d2379ea61e76d9110f16866c9d1ca8936ca434e7d1a91b6e7644b420"} Feb 18 16:44:19 crc kubenswrapper[4812]: I0218 16:44:19.165384 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:19 crc kubenswrapper[4812]: I0218 16:44:19.183063 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" podStartSLOduration=1.472280124 podStartE2EDuration="14.183044246s" podCreationTimestamp="2026-02-18 16:44:05 +0000 UTC" firstStartedPulling="2026-02-18 16:44:05.790200396 +0000 UTC m=+866.055811305" lastFinishedPulling="2026-02-18 16:44:18.500964518 +0000 UTC m=+878.766575427" observedRunningTime="2026-02-18 16:44:19.181382395 +0000 UTC m=+879.446993304" watchObservedRunningTime="2026-02-18 16:44:19.183044246 +0000 UTC m=+879.448655145" Feb 18 16:44:25 crc kubenswrapper[4812]: I0218 16:44:25.794682 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84fb85c775-sdk9c" Feb 18 16:44:33 crc kubenswrapper[4812]: I0218 16:44:33.414420 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:44:33 crc kubenswrapper[4812]: I0218 16:44:33.415237 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:44:55 crc kubenswrapper[4812]: I0218 16:44:55.385325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8495744bb9-sfmg9" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.232318 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7w2pt"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.235876 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.238807 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.238839 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xwz6k" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.238811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.240089 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.241414 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.243016 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.264207 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.352745 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7ctsm"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.353895 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.360384 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.360433 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.360507 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.367263 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sx4bv" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.373683 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-8dwvz"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.375496 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.379135 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.389542 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8dwvz"] Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.435933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-reloader\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.435988 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx65\" (UniqueName: \"kubernetes.io/projected/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-kube-api-access-kmx65\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436011 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-sockets\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-startup\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436529 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436772 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kts4n\" (UniqueName: \"kubernetes.io/projected/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-kube-api-access-kts4n\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436823 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.436853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-conf\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 
crc kubenswrapper[4812]: I0218 16:44:56.436908 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics-certs\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.537847 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-metrics-certs\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.537905 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx65\" (UniqueName: \"kubernetes.io/projected/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-kube-api-access-kmx65\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.537926 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-sockets\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.537948 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvc29\" (UniqueName: \"kubernetes.io/projected/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-kube-api-access-gvc29\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-cert\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/77b4590b-339b-49df-b28e-88be89a335d1-metallb-excludel2\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538294 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-startup\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-metrics-certs\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 
16:44:56.538452 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-sockets\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538532 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzd4\" (UniqueName: \"kubernetes.io/projected/77b4590b-339b-49df-b28e-88be89a335d1-kube-api-access-frzd4\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538591 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kts4n\" (UniqueName: \"kubernetes.io/projected/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-kube-api-access-kts4n\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538637 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-conf\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics-certs\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-reloader\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.538940 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics\") pod \"frr-k8s-7w2pt\" (UID: 
\"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.539130 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-conf\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.539361 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-frr-startup\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.539430 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-reloader\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.545678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-metrics-certs\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.546330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.565268 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kts4n\" (UniqueName: \"kubernetes.io/projected/0ac2deeb-2838-428e-b648-0f9ea2d0aed5-kube-api-access-kts4n\") pod \"frr-k8s-7w2pt\" (UID: \"0ac2deeb-2838-428e-b648-0f9ea2d0aed5\") " pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.568720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmx65\" (UniqueName: \"kubernetes.io/projected/8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896-kube-api-access-kmx65\") pod \"frr-k8s-webhook-server-78b44bf5bb-n67c7\" (UID: \"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639645 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvc29\" (UniqueName: \"kubernetes.io/projected/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-kube-api-access-gvc29\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-cert\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639729 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/77b4590b-339b-49df-b28e-88be89a335d1-metallb-excludel2\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-metrics-certs\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639795 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frzd4\" (UniqueName: \"kubernetes.io/projected/77b4590b-339b-49df-b28e-88be89a335d1-kube-api-access-frzd4\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639827 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.639876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-metrics-certs\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: E0218 16:44:56.639987 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 16:44:56 crc kubenswrapper[4812]: E0218 16:44:56.640090 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist podName:77b4590b-339b-49df-b28e-88be89a335d1 nodeName:}" failed. No retries permitted until 2026-02-18 16:44:57.140064002 +0000 UTC m=+917.405675091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist") pod "speaker-7ctsm" (UID: "77b4590b-339b-49df-b28e-88be89a335d1") : secret "metallb-memberlist" not found Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.640856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/77b4590b-339b-49df-b28e-88be89a335d1-metallb-excludel2\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.643723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-metrics-certs\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.643821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-metrics-certs\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.644710 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.653425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-cert\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.660074 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frzd4\" (UniqueName: \"kubernetes.io/projected/77b4590b-339b-49df-b28e-88be89a335d1-kube-api-access-frzd4\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.664437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvc29\" (UniqueName: \"kubernetes.io/projected/46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4-kube-api-access-gvc29\") pod \"controller-69bbfbf88f-8dwvz\" (UID: \"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4\") " pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.690485 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.857186 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:44:56 crc kubenswrapper[4812]: I0218 16:44:56.866711 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.120294 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8dwvz"] Feb 18 16:44:57 crc kubenswrapper[4812]: W0218 16:44:57.125712 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46d04cd0_c5e3_4d50_a2be_e0e6e4070ba4.slice/crio-6f78b44f203ea3631a248589551933862388911b4c8841911e5a93b5f8100c04 WatchSource:0}: Error finding container 6f78b44f203ea3631a248589551933862388911b4c8841911e5a93b5f8100c04: Status 404 returned error can't find the container with id 6f78b44f203ea3631a248589551933862388911b4c8841911e5a93b5f8100c04 Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.146451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:57 crc kubenswrapper[4812]: E0218 16:44:57.146646 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 16:44:57 crc kubenswrapper[4812]: E0218 16:44:57.146753 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist podName:77b4590b-339b-49df-b28e-88be89a335d1 nodeName:}" failed. No retries permitted until 2026-02-18 16:44:58.146720995 +0000 UTC m=+918.412331944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist") pod "speaker-7ctsm" (UID: "77b4590b-339b-49df-b28e-88be89a335d1") : secret "metallb-memberlist" not found Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.313330 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7"] Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.458268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" event={"ID":"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896","Type":"ContainerStarted","Data":"43c5726a717e329042222d868b14b211fd5d8d75e2a339af18c0133882c626e3"} Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.459893 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"317933860af7bd90baa0655864e059055411bdb7b7d10413ea8ae5511cd693fd"} Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.461976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8dwvz" event={"ID":"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4","Type":"ContainerStarted","Data":"a22b6390c004312543d3f8cbd08de83b6433039cedfce80588430e8a955fe014"} Feb 18 16:44:57 crc kubenswrapper[4812]: I0218 16:44:57.462011 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8dwvz" event={"ID":"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4","Type":"ContainerStarted","Data":"6f78b44f203ea3631a248589551933862388911b4c8841911e5a93b5f8100c04"} Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.165002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.173855 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/77b4590b-339b-49df-b28e-88be89a335d1-memberlist\") pod \"speaker-7ctsm\" (UID: \"77b4590b-339b-49df-b28e-88be89a335d1\") " pod="metallb-system/speaker-7ctsm" Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.467836 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7ctsm" Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.472759 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8dwvz" event={"ID":"46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4","Type":"ContainerStarted","Data":"a0db544dc7b6204f8cd7436a1b1529ffb0e0523941b44c7b29dc5af4ae4e7e34"} Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.473351 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:44:58 crc kubenswrapper[4812]: W0218 16:44:58.493761 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b4590b_339b_49df_b28e_88be89a335d1.slice/crio-9de4c4690ef74dee96ce68ec00ec5d2997b5c2e4d7bf9b2104a349be38c4cb78 WatchSource:0}: Error finding container 9de4c4690ef74dee96ce68ec00ec5d2997b5c2e4d7bf9b2104a349be38c4cb78: Status 404 returned error can't find the container with id 9de4c4690ef74dee96ce68ec00ec5d2997b5c2e4d7bf9b2104a349be38c4cb78 Feb 18 16:44:58 crc kubenswrapper[4812]: I0218 16:44:58.496515 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-8dwvz" podStartSLOduration=2.496480435 podStartE2EDuration="2.496480435s" podCreationTimestamp="2026-02-18 16:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:44:58.49222317 +0000 UTC m=+918.757834099" watchObservedRunningTime="2026-02-18 16:44:58.496480435 +0000 UTC m=+918.762091344" Feb 18 16:44:59 crc kubenswrapper[4812]: I0218 16:44:59.480421 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7ctsm" event={"ID":"77b4590b-339b-49df-b28e-88be89a335d1","Type":"ContainerStarted","Data":"e24f1d81d67319d48b9817e0cee5106e2e73f428a1c73393d495e7e0ef46bd2a"} Feb 18 16:44:59 crc kubenswrapper[4812]: I0218 16:44:59.480479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7ctsm" event={"ID":"77b4590b-339b-49df-b28e-88be89a335d1","Type":"ContainerStarted","Data":"3cbd184a42c99d836db21b5bb26d328dcbdc8965e9ac69784350fd47967cdb9b"} Feb 18 16:44:59 crc kubenswrapper[4812]: I0218 16:44:59.480490 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7ctsm" event={"ID":"77b4590b-339b-49df-b28e-88be89a335d1","Type":"ContainerStarted","Data":"9de4c4690ef74dee96ce68ec00ec5d2997b5c2e4d7bf9b2104a349be38c4cb78"} Feb 18 16:44:59 crc kubenswrapper[4812]: I0218 16:44:59.480704 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7ctsm" Feb 18 16:44:59 crc kubenswrapper[4812]: I0218 16:44:59.505285 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-7ctsm" podStartSLOduration=3.505261909 podStartE2EDuration="3.505261909s" podCreationTimestamp="2026-02-18 16:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:44:59.503980038 +0000 UTC m=+919.769590947" watchObservedRunningTime="2026-02-18 16:44:59.505261909 +0000 UTC m=+919.770872818" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.174736 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs"] Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.175945 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.178523 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.178862 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.187854 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs"] Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.301791 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.302806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4w8l\" (UniqueName: \"kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.302854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.405002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4w8l\" (UniqueName: \"kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.405067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.405148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.406222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.412956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.426635 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4w8l\" (UniqueName: \"kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l\") pod \"collect-profiles-29523885-hzpcs\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.515820 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:00 crc kubenswrapper[4812]: I0218 16:45:00.746934 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs"] Feb 18 16:45:00 crc kubenswrapper[4812]: W0218 16:45:00.753886 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04518de5_df9f_4e43_b939_89cbfc52a56a.slice/crio-703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da WatchSource:0}: Error finding container 703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da: Status 404 returned error can't find the container with id 703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da Feb 18 16:45:01 crc kubenswrapper[4812]: I0218 16:45:01.501992 4812 generic.go:334] "Generic (PLEG): container finished" podID="04518de5-df9f-4e43-b939-89cbfc52a56a" containerID="f4b92cf5ee3c2f5c85c6713a1ba81afe2e1b4582dbc7554bf990d81173df003f" exitCode=0 Feb 18 16:45:01 crc kubenswrapper[4812]: I0218 16:45:01.502109 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" event={"ID":"04518de5-df9f-4e43-b939-89cbfc52a56a","Type":"ContainerDied","Data":"f4b92cf5ee3c2f5c85c6713a1ba81afe2e1b4582dbc7554bf990d81173df003f"} Feb 18 16:45:01 crc kubenswrapper[4812]: I0218 16:45:01.502150 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" event={"ID":"04518de5-df9f-4e43-b939-89cbfc52a56a","Type":"ContainerStarted","Data":"703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da"} Feb 18 16:45:03 crc kubenswrapper[4812]: I0218 16:45:03.417052 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:45:03 crc kubenswrapper[4812]: I0218 16:45:03.417447 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:45:03 crc kubenswrapper[4812]: I0218 16:45:03.417525 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:45:03 crc kubenswrapper[4812]: I0218 16:45:03.418282 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:45:03 crc kubenswrapper[4812]: I0218 16:45:03.418378 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3" 
gracePeriod=600 Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.533235 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3" exitCode=0 Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.533296 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3"} Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.533366 4812 scope.go:117] "RemoveContainer" containerID="3d7581d2e4c25fbed3ef5d75135c31adb6689621fea51307fde2e7105a8b0b60" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.535548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" event={"ID":"04518de5-df9f-4e43-b939-89cbfc52a56a","Type":"ContainerDied","Data":"703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da"} Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.535596 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703d0f6a54be5aa1f53b56fb0c7186d51b8e4dc2000d7a44363f4728f0edc7da" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.556540 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.679837 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4w8l\" (UniqueName: \"kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l\") pod \"04518de5-df9f-4e43-b939-89cbfc52a56a\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.679890 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume\") pod \"04518de5-df9f-4e43-b939-89cbfc52a56a\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.679924 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume\") pod \"04518de5-df9f-4e43-b939-89cbfc52a56a\" (UID: \"04518de5-df9f-4e43-b939-89cbfc52a56a\") " Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.686629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l" (OuterVolumeSpecName: "kube-api-access-x4w8l") pod "04518de5-df9f-4e43-b939-89cbfc52a56a" (UID: "04518de5-df9f-4e43-b939-89cbfc52a56a"). InnerVolumeSpecName "kube-api-access-x4w8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.687224 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume" (OuterVolumeSpecName: "config-volume") pod "04518de5-df9f-4e43-b939-89cbfc52a56a" (UID: "04518de5-df9f-4e43-b939-89cbfc52a56a"). InnerVolumeSpecName "config-volume". 
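A few entries earlier, the machine-config-daemon container fails its HTTP liveness probe (GET http://127.0.0.1:8798/health is refused), so the kubelet marks it for restart, kills it with the 600s grace period, and starts a replacement; that is the ContainerDied/ContainerStarted pair for machine-config-daemon-hhkxg recorded above and at 16:45:05. The sketch below models that probe-then-restart behaviour in plain Go as an illustration only; the failure threshold and probe period are assumptions, and only the URL and grace period are taken from the log.

```go
// Illustrative sketch (not kubelet code): repeatedly probe an HTTP health
// endpoint and declare the container due for restart once consecutive
// failures reach a threshold, mirroring the liveness-probe entries above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs a single HTTP liveness check, as the prober does.
func probeOnce(url string) error {
	client := http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as seen in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const url = "http://127.0.0.1:8798/health" // endpoint from the log
	failureThreshold := 3                      // assumed; not stated in the log
	period := 10 * time.Second                 // assumed probe period
	gracePeriod := 600 * time.Second           // matches gracePeriod=600 above

	failures := 0
	for {
		if err := probeOnce(url); err != nil {
			failures++
			fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
			if failures >= failureThreshold {
				fmt.Printf("Container failed liveness probe, will be restarted (grace period %v)\n",
					gracePeriod)
				return
			}
		} else {
			failures = 0 // a success resets the consecutive-failure count
		}
		time.Sleep(period)
	}
}
```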
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.689110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04518de5-df9f-4e43-b939-89cbfc52a56a" (UID: "04518de5-df9f-4e43-b939-89cbfc52a56a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.781647 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4w8l\" (UniqueName: \"kubernetes.io/projected/04518de5-df9f-4e43-b939-89cbfc52a56a-kube-api-access-x4w8l\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.782035 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04518de5-df9f-4e43-b939-89cbfc52a56a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:04 crc kubenswrapper[4812]: I0218 16:45:04.782052 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04518de5-df9f-4e43-b939-89cbfc52a56a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.545291 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369"} Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.547045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" event={"ID":"8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896","Type":"ContainerStarted","Data":"16d09642578378b5b16ec0d0041e63ffe440654e3bc4ef550fdae0fb5feed033"} Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.547145 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.548277 4812 generic.go:334] "Generic (PLEG): container finished" podID="0ac2deeb-2838-428e-b648-0f9ea2d0aed5" containerID="c366e24d3e2ecdf47f8e445e5868149f9b68503a11c5a014fb7b04e18a05a2cf" exitCode=0 Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.548327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerDied","Data":"c366e24d3e2ecdf47f8e445e5868149f9b68503a11c5a014fb7b04e18a05a2cf"} Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.548338 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs" Feb 18 16:45:05 crc kubenswrapper[4812]: I0218 16:45:05.630564 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" podStartSLOduration=2.310938311 podStartE2EDuration="9.630538142s" podCreationTimestamp="2026-02-18 16:44:56 +0000 UTC" firstStartedPulling="2026-02-18 16:44:57.326379575 +0000 UTC m=+917.591990484" lastFinishedPulling="2026-02-18 16:45:04.645979406 +0000 UTC m=+924.911590315" observedRunningTime="2026-02-18 16:45:05.629153648 +0000 UTC m=+925.894764567" watchObservedRunningTime="2026-02-18 16:45:05.630538142 +0000 UTC m=+925.896149051" Feb 18 16:45:06 crc kubenswrapper[4812]: I0218 16:45:06.564811 4812 generic.go:334] "Generic (PLEG): container finished" podID="0ac2deeb-2838-428e-b648-0f9ea2d0aed5" containerID="e72a0742191badb14c03fac964cc2deb179f31c61b8493194756e57c43f59fc8" exitCode=0 Feb 18 16:45:06 crc kubenswrapper[4812]: I0218 16:45:06.564895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerDied","Data":"e72a0742191badb14c03fac964cc2deb179f31c61b8493194756e57c43f59fc8"} Feb 18 16:45:07 crc kubenswrapper[4812]: I0218 16:45:07.574293 4812 generic.go:334] "Generic (PLEG): container finished" podID="0ac2deeb-2838-428e-b648-0f9ea2d0aed5" containerID="560b41742aa151b73dea1fa01bb18128ca5f9491df9ad326df6cad6c080fe69d" exitCode=0 Feb 18 16:45:07 crc kubenswrapper[4812]: I0218 16:45:07.574407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerDied","Data":"560b41742aa151b73dea1fa01bb18128ca5f9491df9ad326df6cad6c080fe69d"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.473262 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7ctsm" Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584274 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"b5f641db759da6ac45d1ffee510a747d7765d621804dfda048c9663809c48734"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"2f85294c092c9aee290cf843d2346380c7c4b74992b3fd675b09fbdb613a0ed3"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584337 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"a808c120b7bd4e1c71a60518c0e1f46472d3972fe62aff39cb53a60ebfea09e4"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"859855b3d5a428eef6932cfd2523c9d4e5b87f4391ef09e40b0477e4651836d4"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584356 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"c64449199f132b43551667071d0d4837f4f5b4a43b50694caf0d9ea83b442c31"} Feb 
18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7w2pt" event={"ID":"0ac2deeb-2838-428e-b648-0f9ea2d0aed5","Type":"ContainerStarted","Data":"6fbbd4ca755d89794d4f8b48078df1d9c7eb94c390d6f71061df8b464fcee5fc"} Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.584662 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:45:08 crc kubenswrapper[4812]: I0218 16:45:08.607267 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7w2pt" podStartSLOduration=4.928668378 podStartE2EDuration="12.60723962s" podCreationTimestamp="2026-02-18 16:44:56 +0000 UTC" firstStartedPulling="2026-02-18 16:44:57.008801414 +0000 UTC m=+917.274412333" lastFinishedPulling="2026-02-18 16:45:04.687372666 +0000 UTC m=+924.952983575" observedRunningTime="2026-02-18 16:45:08.603762384 +0000 UTC m=+928.869373303" watchObservedRunningTime="2026-02-18 16:45:08.60723962 +0000 UTC m=+928.872850529" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.121876 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:11 crc kubenswrapper[4812]: E0218 16:45:11.122711 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04518de5-df9f-4e43-b939-89cbfc52a56a" containerName="collect-profiles" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.122726 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04518de5-df9f-4e43-b939-89cbfc52a56a" containerName="collect-profiles" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.122883 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04518de5-df9f-4e43-b939-89cbfc52a56a" containerName="collect-profiles" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.123446 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.127395 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mhqjn" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.127412 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.128382 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.148557 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.183047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttb6\" (UniqueName: \"kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6\") pod \"openstack-operator-index-wg4zx\" (UID: \"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646\") " pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.284373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttb6\" (UniqueName: \"kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6\") pod \"openstack-operator-index-wg4zx\" (UID: \"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646\") " pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.307361 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttb6\" (UniqueName: \"kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6\") pod \"openstack-operator-index-wg4zx\" (UID: \"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646\") " pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.443448 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.631958 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:11 crc kubenswrapper[4812]: W0218 16:45:11.638387 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc1eb7fb_2ff1_4186_a4b0_bcba2b0c1646.slice/crio-71e98d1ca06862ce99608a1aa0d3f025438b39d94f711ad9011eadf06f3d03c6 WatchSource:0}: Error finding container 71e98d1ca06862ce99608a1aa0d3f025438b39d94f711ad9011eadf06f3d03c6: Status 404 returned error can't find the container with id 71e98d1ca06862ce99608a1aa0d3f025438b39d94f711ad9011eadf06f3d03c6 Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.857586 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:45:11 crc kubenswrapper[4812]: I0218 16:45:11.894675 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:45:12 crc kubenswrapper[4812]: I0218 16:45:12.631133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wg4zx" event={"ID":"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646","Type":"ContainerStarted","Data":"71e98d1ca06862ce99608a1aa0d3f025438b39d94f711ad9011eadf06f3d03c6"} Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.293317 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.649843 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wg4zx" event={"ID":"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646","Type":"ContainerStarted","Data":"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3"} Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.667809 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wg4zx" podStartSLOduration=1.472706242 podStartE2EDuration="3.667782156s" podCreationTimestamp="2026-02-18 16:45:11 +0000 UTC" firstStartedPulling="2026-02-18 16:45:11.641330682 +0000 UTC m=+931.906941591" lastFinishedPulling="2026-02-18 16:45:13.836406596 +0000 UTC m=+934.102017505" observedRunningTime="2026-02-18 16:45:14.666410282 +0000 UTC m=+934.932021221" watchObservedRunningTime="2026-02-18 16:45:14.667782156 +0000 UTC m=+934.933393075" Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.905213 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ph7db"] Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.906306 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:14 crc kubenswrapper[4812]: I0218 16:45:14.911755 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ph7db"] Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.039533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sj86\" (UniqueName: \"kubernetes.io/projected/1d7285f1-1635-4793-9ed6-1eff7ac4153b-kube-api-access-2sj86\") pod \"openstack-operator-index-ph7db\" (UID: \"1d7285f1-1635-4793-9ed6-1eff7ac4153b\") " pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.140987 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sj86\" (UniqueName: \"kubernetes.io/projected/1d7285f1-1635-4793-9ed6-1eff7ac4153b-kube-api-access-2sj86\") pod \"openstack-operator-index-ph7db\" (UID: \"1d7285f1-1635-4793-9ed6-1eff7ac4153b\") " pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.175365 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sj86\" (UniqueName: \"kubernetes.io/projected/1d7285f1-1635-4793-9ed6-1eff7ac4153b-kube-api-access-2sj86\") pod \"openstack-operator-index-ph7db\" (UID: \"1d7285f1-1635-4793-9ed6-1eff7ac4153b\") " pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.235031 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.495641 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ph7db"] Feb 18 16:45:15 crc kubenswrapper[4812]: W0218 16:45:15.505651 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7285f1_1635_4793_9ed6_1eff7ac4153b.slice/crio-c924681520e62ea0e80c0cf6682a019dac493e9a020187a3b81e7a273e222be6 WatchSource:0}: Error finding container c924681520e62ea0e80c0cf6682a019dac493e9a020187a3b81e7a273e222be6: Status 404 returned error can't find the container with id c924681520e62ea0e80c0cf6682a019dac493e9a020187a3b81e7a273e222be6 Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.659112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ph7db" event={"ID":"1d7285f1-1635-4793-9ed6-1eff7ac4153b","Type":"ContainerStarted","Data":"c924681520e62ea0e80c0cf6682a019dac493e9a020187a3b81e7a273e222be6"} Feb 18 16:45:15 crc kubenswrapper[4812]: I0218 16:45:15.659298 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wg4zx" podUID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" containerName="registry-server" containerID="cri-o://43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3" gracePeriod=2 Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.006203 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.154732 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttb6\" (UniqueName: \"kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6\") pod \"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646\" (UID: \"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646\") " Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.162301 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6" (OuterVolumeSpecName: "kube-api-access-bttb6") pod "dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" (UID: "dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646"). InnerVolumeSpecName "kube-api-access-bttb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.257979 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttb6\" (UniqueName: \"kubernetes.io/projected/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646-kube-api-access-bttb6\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.673975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ph7db" event={"ID":"1d7285f1-1635-4793-9ed6-1eff7ac4153b","Type":"ContainerStarted","Data":"7ad78206dce4f10b349170f52d184728cb9b0c2b1111274f004db67d0b432aeb"} Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.677659 4812 generic.go:334] "Generic (PLEG): container finished" podID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" containerID="43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3" exitCode=0 Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.677733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wg4zx" event={"ID":"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646","Type":"ContainerDied","Data":"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3"} Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.677779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wg4zx" event={"ID":"dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646","Type":"ContainerDied","Data":"71e98d1ca06862ce99608a1aa0d3f025438b39d94f711ad9011eadf06f3d03c6"} Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.677824 4812 scope.go:117] "RemoveContainer" containerID="43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.678176 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wg4zx" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.700338 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-8dwvz" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.719789 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ph7db" podStartSLOduration=2.6274567060000003 podStartE2EDuration="2.719756342s" podCreationTimestamp="2026-02-18 16:45:14 +0000 UTC" firstStartedPulling="2026-02-18 16:45:15.510913415 +0000 UTC m=+935.776524324" lastFinishedPulling="2026-02-18 16:45:15.603213051 +0000 UTC m=+935.868823960" observedRunningTime="2026-02-18 16:45:16.698470277 +0000 UTC m=+936.964081186" watchObservedRunningTime="2026-02-18 16:45:16.719756342 +0000 UTC m=+936.985367251" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.727238 4812 scope.go:117] "RemoveContainer" containerID="43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3" Feb 18 16:45:16 crc kubenswrapper[4812]: E0218 16:45:16.727915 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3\": container with ID starting with 43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3 not found: ID does not exist" containerID="43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.727971 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3"} err="failed to get container status \"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3\": rpc error: code = NotFound desc = could not find container \"43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3\": container with ID starting with 43c34e656c64c7ed159d98018cdf81e588bf6ae33158776ea1c00165b6ae89e3 not found: ID does not exist" Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.728460 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.736118 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-wg4zx"] Feb 18 16:45:16 crc kubenswrapper[4812]: I0218 16:45:16.872993 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-n67c7" Feb 18 16:45:18 crc kubenswrapper[4812]: I0218 16:45:18.516357 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" path="/var/lib/kubelet/pods/dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646/volumes" Feb 18 16:45:25 crc kubenswrapper[4812]: I0218 16:45:25.235488 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:25 crc kubenswrapper[4812]: I0218 16:45:25.236191 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:25 crc kubenswrapper[4812]: I0218 16:45:25.264612 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:25 crc 
kubenswrapper[4812]: I0218 16:45:25.817869 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ph7db" Feb 18 16:45:26 crc kubenswrapper[4812]: I0218 16:45:26.860432 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7w2pt" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.214714 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp"] Feb 18 16:45:32 crc kubenswrapper[4812]: E0218 16:45:32.215964 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" containerName="registry-server" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.215979 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" containerName="registry-server" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.216172 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1eb7fb-2ff1-4186-a4b0-bcba2b0c1646" containerName="registry-server" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.217214 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.220357 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xpz5w" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.237754 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp"] Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.322581 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrlm\" (UniqueName: \"kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.322662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.322787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.424631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: 
\"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.424755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrlm\" (UniqueName: \"kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.424807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.425309 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.425323 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.449220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrlm\" (UniqueName: \"kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm\") pod \"0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.535365 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.755602 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp"] Feb 18 16:45:32 crc kubenswrapper[4812]: I0218 16:45:32.852132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" event={"ID":"34d55cc3-88ce-4b05-8837-9a3f25cd4570","Type":"ContainerStarted","Data":"c60ec8f067c0abf27ec10b67818758b0f106c49c3171c1168e79b6df0e1b61c6"} Feb 18 16:45:33 crc kubenswrapper[4812]: I0218 16:45:33.864880 4812 generic.go:334] "Generic (PLEG): container finished" podID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerID="97b16f01257c8a0932da4970d04b1dab8256cd4f6194f6c6e0b81bbb47cd3aa7" exitCode=0 Feb 18 16:45:33 crc kubenswrapper[4812]: I0218 16:45:33.864981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" event={"ID":"34d55cc3-88ce-4b05-8837-9a3f25cd4570","Type":"ContainerDied","Data":"97b16f01257c8a0932da4970d04b1dab8256cd4f6194f6c6e0b81bbb47cd3aa7"} Feb 18 16:45:34 crc kubenswrapper[4812]: I0218 16:45:34.879913 4812 generic.go:334] "Generic (PLEG): container finished" podID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerID="17fff3d8403508515d7a21e12e99b5e5f8b4657b2b165bde0048c9826b012eb8" exitCode=0 Feb 18 16:45:34 crc kubenswrapper[4812]: I0218 16:45:34.880146 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" event={"ID":"34d55cc3-88ce-4b05-8837-9a3f25cd4570","Type":"ContainerDied","Data":"17fff3d8403508515d7a21e12e99b5e5f8b4657b2b165bde0048c9826b012eb8"} Feb 18 16:45:35 crc kubenswrapper[4812]: I0218 16:45:35.891832 4812 generic.go:334] "Generic (PLEG): container finished" podID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerID="adb65b505bf3113255d559d71b0a0bbcf02a2b8bbc70a4ddb790524366226cc9" exitCode=0 Feb 18 16:45:35 crc kubenswrapper[4812]: I0218 16:45:35.892014 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" event={"ID":"34d55cc3-88ce-4b05-8837-9a3f25cd4570","Type":"ContainerDied","Data":"adb65b505bf3113255d559d71b0a0bbcf02a2b8bbc70a4ddb790524366226cc9"} Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.151936 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.306607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle\") pod \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.306677 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util\") pod \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.306717 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgrlm\" (UniqueName: \"kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm\") pod \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\" (UID: \"34d55cc3-88ce-4b05-8837-9a3f25cd4570\") " Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.307613 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle" (OuterVolumeSpecName: "bundle") pod "34d55cc3-88ce-4b05-8837-9a3f25cd4570" (UID: "34d55cc3-88ce-4b05-8837-9a3f25cd4570"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.314288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm" (OuterVolumeSpecName: "kube-api-access-mgrlm") pod "34d55cc3-88ce-4b05-8837-9a3f25cd4570" (UID: "34d55cc3-88ce-4b05-8837-9a3f25cd4570"). InnerVolumeSpecName "kube-api-access-mgrlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.321599 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util" (OuterVolumeSpecName: "util") pod "34d55cc3-88ce-4b05-8837-9a3f25cd4570" (UID: "34d55cc3-88ce-4b05-8837-9a3f25cd4570"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.408937 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgrlm\" (UniqueName: \"kubernetes.io/projected/34d55cc3-88ce-4b05-8837-9a3f25cd4570-kube-api-access-mgrlm\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.409007 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.409025 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d55cc3-88ce-4b05-8837-9a3f25cd4570-util\") on node \"crc\" DevicePath \"\"" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.910726 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" event={"ID":"34d55cc3-88ce-4b05-8837-9a3f25cd4570","Type":"ContainerDied","Data":"c60ec8f067c0abf27ec10b67818758b0f106c49c3171c1168e79b6df0e1b61c6"} Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.910785 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60ec8f067c0abf27ec10b67818758b0f106c49c3171c1168e79b6df0e1b61c6" Feb 18 16:45:37 crc kubenswrapper[4812]: I0218 16:45:37.910897 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.405742 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7"] Feb 18 16:45:44 crc kubenswrapper[4812]: E0218 16:45:44.406607 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="util" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.406622 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="util" Feb 18 16:45:44 crc kubenswrapper[4812]: E0218 16:45:44.406640 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="extract" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.406646 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="extract" Feb 18 16:45:44 crc kubenswrapper[4812]: E0218 16:45:44.406653 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="pull" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.406662 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="pull" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.406770 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d55cc3-88ce-4b05-8837-9a3f25cd4570" containerName="extract" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.407267 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.409734 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9z44t" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.430048 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7"] Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.517453 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6w2z\" (UniqueName: \"kubernetes.io/projected/7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb-kube-api-access-k6w2z\") pod \"openstack-operator-controller-init-7488c4c4f-csxg7\" (UID: \"7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb\") " pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.618890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6w2z\" (UniqueName: \"kubernetes.io/projected/7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb-kube-api-access-k6w2z\") pod \"openstack-operator-controller-init-7488c4c4f-csxg7\" (UID: \"7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb\") " pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.636629 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6w2z\" (UniqueName: \"kubernetes.io/projected/7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb-kube-api-access-k6w2z\") pod \"openstack-operator-controller-init-7488c4c4f-csxg7\" (UID: \"7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb\") " pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.726285 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:44 crc kubenswrapper[4812]: I0218 16:45:44.954148 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7"] Feb 18 16:45:44 crc kubenswrapper[4812]: W0218 16:45:44.958969 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7247ef5f_7aa8_4b7c_a9bd_20e50002b7cb.slice/crio-f941822b0b8036b4c4a16a3d39d3ff1192bb6d02bc517b374e6fb576d62cb8b7 WatchSource:0}: Error finding container f941822b0b8036b4c4a16a3d39d3ff1192bb6d02bc517b374e6fb576d62cb8b7: Status 404 returned error can't find the container with id f941822b0b8036b4c4a16a3d39d3ff1192bb6d02bc517b374e6fb576d62cb8b7 Feb 18 16:45:45 crc kubenswrapper[4812]: I0218 16:45:45.965038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" event={"ID":"7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb","Type":"ContainerStarted","Data":"f941822b0b8036b4c4a16a3d39d3ff1192bb6d02bc517b374e6fb576d62cb8b7"} Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.484122 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.486156 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.511549 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.630629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.630716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.630853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwn9\" (UniqueName: \"kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.732484 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrwn9\" (UniqueName: \"kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.732589 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.732625 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.733165 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.733387 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.763580 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rrwn9\" (UniqueName: \"kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9\") pod \"redhat-marketplace-drmkn\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:49 crc kubenswrapper[4812]: I0218 16:45:49.809033 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:51 crc kubenswrapper[4812]: I0218 16:45:51.240974 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:45:51 crc kubenswrapper[4812]: W0218 16:45:51.247507 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cf60070_65e7_4954_8db6_7f28741c7589.slice/crio-5787114df463065582270b9c8c792628f8174df6e2076470822a1ef59cd94a25 WatchSource:0}: Error finding container 5787114df463065582270b9c8c792628f8174df6e2076470822a1ef59cd94a25: Status 404 returned error can't find the container with id 5787114df463065582270b9c8c792628f8174df6e2076470822a1ef59cd94a25 Feb 18 16:45:52 crc kubenswrapper[4812]: I0218 16:45:52.029922 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cf60070-65e7-4954-8db6-7f28741c7589" containerID="3a086ccaf9b991230486edb8e73209daf6e980e28a915310f4efc34479148803" exitCode=0 Feb 18 16:45:52 crc kubenswrapper[4812]: I0218 16:45:52.030055 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerDied","Data":"3a086ccaf9b991230486edb8e73209daf6e980e28a915310f4efc34479148803"} Feb 18 16:45:52 crc kubenswrapper[4812]: I0218 16:45:52.030625 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerStarted","Data":"5787114df463065582270b9c8c792628f8174df6e2076470822a1ef59cd94a25"} Feb 18 16:45:52 crc kubenswrapper[4812]: I0218 16:45:52.036560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" event={"ID":"7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb","Type":"ContainerStarted","Data":"96d065504012bdb5bab12b52cd0ee10c19fa9e5e85dbca98e9321bcf42fa02ff"} Feb 18 16:45:52 crc kubenswrapper[4812]: I0218 16:45:52.036746 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:45:53 crc kubenswrapper[4812]: I0218 16:45:53.045170 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cf60070-65e7-4954-8db6-7f28741c7589" containerID="c860c42969260297716fece5b862452fb32e277a0b7e5fd0a8d0ee8edf333551" exitCode=0 Feb 18 16:45:53 crc kubenswrapper[4812]: I0218 16:45:53.045284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerDied","Data":"c860c42969260297716fece5b862452fb32e277a0b7e5fd0a8d0ee8edf333551"} Feb 18 16:45:53 crc kubenswrapper[4812]: I0218 16:45:53.079226 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" podStartSLOduration=3.194594656 podStartE2EDuration="9.079205895s" podCreationTimestamp="2026-02-18 16:45:44 +0000 UTC" 
firstStartedPulling="2026-02-18 16:45:44.961594737 +0000 UTC m=+965.227205646" lastFinishedPulling="2026-02-18 16:45:50.846205976 +0000 UTC m=+971.111816885" observedRunningTime="2026-02-18 16:45:52.110496929 +0000 UTC m=+972.376107858" watchObservedRunningTime="2026-02-18 16:45:53.079205895 +0000 UTC m=+973.344816804" Feb 18 16:45:54 crc kubenswrapper[4812]: I0218 16:45:54.053491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerStarted","Data":"97a8d1a0a78f89b1b5466dd0b793970c0be9fcaed551008b3ea72a6ab922a275"} Feb 18 16:45:54 crc kubenswrapper[4812]: I0218 16:45:54.075524 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-drmkn" podStartSLOduration=3.621597692 podStartE2EDuration="5.075497261s" podCreationTimestamp="2026-02-18 16:45:49 +0000 UTC" firstStartedPulling="2026-02-18 16:45:52.032285911 +0000 UTC m=+972.297896820" lastFinishedPulling="2026-02-18 16:45:53.48618548 +0000 UTC m=+973.751796389" observedRunningTime="2026-02-18 16:45:54.074993239 +0000 UTC m=+974.340604148" watchObservedRunningTime="2026-02-18 16:45:54.075497261 +0000 UTC m=+974.341108170" Feb 18 16:45:59 crc kubenswrapper[4812]: I0218 16:45:59.809988 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:59 crc kubenswrapper[4812]: I0218 16:45:59.810793 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:45:59 crc kubenswrapper[4812]: I0218 16:45:59.890618 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:46:00 crc kubenswrapper[4812]: I0218 16:46:00.157289 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:46:02 crc kubenswrapper[4812]: I0218 16:46:02.277273 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:46:02 crc kubenswrapper[4812]: I0218 16:46:02.277715 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-drmkn" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="registry-server" containerID="cri-o://97a8d1a0a78f89b1b5466dd0b793970c0be9fcaed551008b3ea72a6ab922a275" gracePeriod=2 Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.124265 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cf60070-65e7-4954-8db6-7f28741c7589" containerID="97a8d1a0a78f89b1b5466dd0b793970c0be9fcaed551008b3ea72a6ab922a275" exitCode=0 Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.124800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerDied","Data":"97a8d1a0a78f89b1b5466dd0b793970c0be9fcaed551008b3ea72a6ab922a275"} Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.331133 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.467405 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content\") pod \"8cf60070-65e7-4954-8db6-7f28741c7589\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.467618 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities\") pod \"8cf60070-65e7-4954-8db6-7f28741c7589\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.467672 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwn9\" (UniqueName: \"kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9\") pod \"8cf60070-65e7-4954-8db6-7f28741c7589\" (UID: \"8cf60070-65e7-4954-8db6-7f28741c7589\") " Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.469217 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities" (OuterVolumeSpecName: "utilities") pod "8cf60070-65e7-4954-8db6-7f28741c7589" (UID: "8cf60070-65e7-4954-8db6-7f28741c7589"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.475410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9" (OuterVolumeSpecName: "kube-api-access-rrwn9") pod "8cf60070-65e7-4954-8db6-7f28741c7589" (UID: "8cf60070-65e7-4954-8db6-7f28741c7589"). InnerVolumeSpecName "kube-api-access-rrwn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.569375 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:03 crc kubenswrapper[4812]: I0218 16:46:03.569405 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwn9\" (UniqueName: \"kubernetes.io/projected/8cf60070-65e7-4954-8db6-7f28741c7589-kube-api-access-rrwn9\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.124921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cf60070-65e7-4954-8db6-7f28741c7589" (UID: "8cf60070-65e7-4954-8db6-7f28741c7589"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.140837 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-drmkn" event={"ID":"8cf60070-65e7-4954-8db6-7f28741c7589","Type":"ContainerDied","Data":"5787114df463065582270b9c8c792628f8174df6e2076470822a1ef59cd94a25"} Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.140961 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-drmkn" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.140970 4812 scope.go:117] "RemoveContainer" containerID="97a8d1a0a78f89b1b5466dd0b793970c0be9fcaed551008b3ea72a6ab922a275" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.184757 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cf60070-65e7-4954-8db6-7f28741c7589-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.197273 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.203343 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-drmkn"] Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.206185 4812 scope.go:117] "RemoveContainer" containerID="c860c42969260297716fece5b862452fb32e277a0b7e5fd0a8d0ee8edf333551" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.230220 4812 scope.go:117] "RemoveContainer" containerID="3a086ccaf9b991230486edb8e73209daf6e980e28a915310f4efc34479148803" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.524589 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" path="/var/lib/kubelet/pods/8cf60070-65e7-4954-8db6-7f28741c7589/volumes" Feb 18 16:46:04 crc kubenswrapper[4812]: I0218 16:46:04.732861 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7488c4c4f-csxg7" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.276787 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:06 crc kubenswrapper[4812]: E0218 16:46:06.277319 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="extract-utilities" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.277340 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="extract-utilities" Feb 18 16:46:06 crc kubenswrapper[4812]: E0218 16:46:06.277377 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="extract-content" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.277386 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="extract-content" Feb 18 16:46:06 crc kubenswrapper[4812]: E0218 16:46:06.277401 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="registry-server" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.277410 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="registry-server" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.277583 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf60070-65e7-4954-8db6-7f28741c7589" containerName="registry-server" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.278718 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.304708 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.423315 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.423398 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd5w\" (UniqueName: \"kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.423646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.525286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd5w\" (UniqueName: \"kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.525381 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.525437 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.525974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.525986 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.549955 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ntd5w\" (UniqueName: \"kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w\") pod \"certified-operators-4frdr\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:06 crc kubenswrapper[4812]: I0218 16:46:06.598724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:07 crc kubenswrapper[4812]: I0218 16:46:07.092029 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:07 crc kubenswrapper[4812]: I0218 16:46:07.166728 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerStarted","Data":"8a13f513e6b2b6e296614ddb51408cf5fa739006e6ab24db51dea65ba19aa944"} Feb 18 16:46:08 crc kubenswrapper[4812]: I0218 16:46:08.181935 4812 generic.go:334] "Generic (PLEG): container finished" podID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerID="e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4" exitCode=0 Feb 18 16:46:08 crc kubenswrapper[4812]: I0218 16:46:08.182343 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerDied","Data":"e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4"} Feb 18 16:46:10 crc kubenswrapper[4812]: I0218 16:46:10.426616 4812 generic.go:334] "Generic (PLEG): container finished" podID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerID="2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde" exitCode=0 Feb 18 16:46:10 crc kubenswrapper[4812]: I0218 16:46:10.426744 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerDied","Data":"2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde"} Feb 18 16:46:11 crc kubenswrapper[4812]: I0218 16:46:11.437818 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerStarted","Data":"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb"} Feb 18 16:46:11 crc kubenswrapper[4812]: I0218 16:46:11.457748 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4frdr" podStartSLOduration=2.803435612 podStartE2EDuration="5.45772882s" podCreationTimestamp="2026-02-18 16:46:06 +0000 UTC" firstStartedPulling="2026-02-18 16:46:08.184303876 +0000 UTC m=+988.449914785" lastFinishedPulling="2026-02-18 16:46:10.838597054 +0000 UTC m=+991.104207993" observedRunningTime="2026-02-18 16:46:11.455822193 +0000 UTC m=+991.721433102" watchObservedRunningTime="2026-02-18 16:46:11.45772882 +0000 UTC m=+991.723339729" Feb 18 16:46:16 crc kubenswrapper[4812]: I0218 16:46:16.599714 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:16 crc kubenswrapper[4812]: I0218 16:46:16.600781 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:16 crc kubenswrapper[4812]: I0218 16:46:16.652012 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:17 crc kubenswrapper[4812]: I0218 16:46:17.525384 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:19 crc kubenswrapper[4812]: I0218 16:46:19.073185 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:19 crc kubenswrapper[4812]: I0218 16:46:19.500660 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4frdr" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="registry-server" containerID="cri-o://fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb" gracePeriod=2 Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.485637 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.537707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content\") pod \"44f78742-8ae1-40b0-8c2c-d695f58a91de\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.537865 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntd5w\" (UniqueName: \"kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w\") pod \"44f78742-8ae1-40b0-8c2c-d695f58a91de\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.537908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities\") pod \"44f78742-8ae1-40b0-8c2c-d695f58a91de\" (UID: \"44f78742-8ae1-40b0-8c2c-d695f58a91de\") " Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.539179 4812 generic.go:334] "Generic (PLEG): container finished" podID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerID="fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb" exitCode=0 Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.539288 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4frdr" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.539951 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities" (OuterVolumeSpecName: "utilities") pod "44f78742-8ae1-40b0-8c2c-d695f58a91de" (UID: "44f78742-8ae1-40b0-8c2c-d695f58a91de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.545898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerDied","Data":"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb"} Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.545952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4frdr" event={"ID":"44f78742-8ae1-40b0-8c2c-d695f58a91de","Type":"ContainerDied","Data":"8a13f513e6b2b6e296614ddb51408cf5fa739006e6ab24db51dea65ba19aa944"} Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.545976 4812 scope.go:117] "RemoveContainer" containerID="fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.547315 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w" (OuterVolumeSpecName: "kube-api-access-ntd5w") pod "44f78742-8ae1-40b0-8c2c-d695f58a91de" (UID: "44f78742-8ae1-40b0-8c2c-d695f58a91de"). InnerVolumeSpecName "kube-api-access-ntd5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.635967 4812 scope.go:117] "RemoveContainer" containerID="2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.639328 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntd5w\" (UniqueName: \"kubernetes.io/projected/44f78742-8ae1-40b0-8c2c-d695f58a91de-kube-api-access-ntd5w\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.639367 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.661219 4812 scope.go:117] "RemoveContainer" containerID="e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.683937 4812 scope.go:117] "RemoveContainer" containerID="fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb" Feb 18 16:46:20 crc kubenswrapper[4812]: E0218 16:46:20.684944 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb\": container with ID starting with fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb not found: ID does not exist" containerID="fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.684976 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb"} err="failed to get container status \"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb\": rpc error: code = NotFound desc = could not find container \"fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb\": container with ID starting with fac92ec047023d0f635f33f47e3d389ddafb3b9643e0795ffa4b409d7c4330fb not found: ID does not exist" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.684998 4812 
scope.go:117] "RemoveContainer" containerID="2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde" Feb 18 16:46:20 crc kubenswrapper[4812]: E0218 16:46:20.685385 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde\": container with ID starting with 2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde not found: ID does not exist" containerID="2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.685420 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde"} err="failed to get container status \"2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde\": rpc error: code = NotFound desc = could not find container \"2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde\": container with ID starting with 2dc5914234cdef0e822191fd3f3f966eb68bbb759d89629626c1f609712ecfde not found: ID does not exist" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.685442 4812 scope.go:117] "RemoveContainer" containerID="e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4" Feb 18 16:46:20 crc kubenswrapper[4812]: E0218 16:46:20.686036 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4\": container with ID starting with e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4 not found: ID does not exist" containerID="e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4" Feb 18 16:46:20 crc kubenswrapper[4812]: I0218 16:46:20.686114 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4"} err="failed to get container status \"e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4\": rpc error: code = NotFound desc = could not find container \"e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4\": container with ID starting with e0fe29e98d009ceaa6aeb8eb4d619cd59b2503e771cc8090378a30fca5cadbe4 not found: ID does not exist" Feb 18 16:46:21 crc kubenswrapper[4812]: I0218 16:46:21.087297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44f78742-8ae1-40b0-8c2c-d695f58a91de" (UID: "44f78742-8ae1-40b0-8c2c-d695f58a91de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:46:21 crc kubenswrapper[4812]: I0218 16:46:21.146208 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44f78742-8ae1-40b0-8c2c-d695f58a91de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:46:21 crc kubenswrapper[4812]: I0218 16:46:21.174964 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:21 crc kubenswrapper[4812]: I0218 16:46:21.183295 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4frdr"] Feb 18 16:46:22 crc kubenswrapper[4812]: I0218 16:46:22.518011 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" path="/var/lib/kubelet/pods/44f78742-8ae1-40b0-8c2c-d695f58a91de/volumes" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.333740 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l"] Feb 18 16:46:33 crc kubenswrapper[4812]: E0218 16:46:33.334560 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="registry-server" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.334576 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="registry-server" Feb 18 16:46:33 crc kubenswrapper[4812]: E0218 16:46:33.334602 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="extract-content" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.334608 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="extract-content" Feb 18 16:46:33 crc kubenswrapper[4812]: E0218 16:46:33.334620 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="extract-utilities" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.334626 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="extract-utilities" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.334764 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="44f78742-8ae1-40b0-8c2c-d695f58a91de" containerName="registry-server" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.335239 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.342074 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-w6tfr" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.342464 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjwm\" (UniqueName: \"kubernetes.io/projected/aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa-kube-api-access-ptjwm\") pod \"barbican-operator-controller-manager-868647ff47-rns6l\" (UID: \"aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.356688 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.358153 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.361819 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-l9k65" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.371402 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.376435 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.383569 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.384584 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.389552 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zcnj7" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.412614 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.413907 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.428479 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jzphf" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.447474 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjz2\" (UniqueName: \"kubernetes.io/projected/196a8044-f16c-465d-a1e4-e1e6703bf050-kube-api-access-ldjz2\") pod \"glance-operator-controller-manager-77987464f4-q4s5n\" (UID: \"196a8044-f16c-465d-a1e4-e1e6703bf050\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.447583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjwm\" (UniqueName: \"kubernetes.io/projected/aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa-kube-api-access-ptjwm\") pod \"barbican-operator-controller-manager-868647ff47-rns6l\" (UID: \"aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.447635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlnk\" (UniqueName: \"kubernetes.io/projected/78fc7ff4-fa73-4323-9756-db5902a66158-kube-api-access-zvlnk\") pod \"cinder-operator-controller-manager-5d946d989d-zw426\" (UID: \"78fc7ff4-fa73-4323-9756-db5902a66158\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.447686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4dln\" (UniqueName: \"kubernetes.io/projected/0f46711f-425e-4dbb-8a5d-ed6084adfde8-kube-api-access-h4dln\") pod \"designate-operator-controller-manager-6d8bf5c495-wq29b\" (UID: \"0f46711f-425e-4dbb-8a5d-ed6084adfde8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.451705 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.452868 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.455064 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t8p49" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.455889 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.483857 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.484608 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjwm\" (UniqueName: \"kubernetes.io/projected/aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa-kube-api-access-ptjwm\") pod \"barbican-operator-controller-manager-868647ff47-rns6l\" (UID: \"aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.492803 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.493781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.508067 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.508773 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tcjpl" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.520550 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.521415 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.528494 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d72n2" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlnk\" (UniqueName: \"kubernetes.io/projected/78fc7ff4-fa73-4323-9756-db5902a66158-kube-api-access-zvlnk\") pod \"cinder-operator-controller-manager-5d946d989d-zw426\" (UID: \"78fc7ff4-fa73-4323-9756-db5902a66158\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548659 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4dln\" (UniqueName: \"kubernetes.io/projected/0f46711f-425e-4dbb-8a5d-ed6084adfde8-kube-api-access-h4dln\") pod \"designate-operator-controller-manager-6d8bf5c495-wq29b\" (UID: \"0f46711f-425e-4dbb-8a5d-ed6084adfde8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548700 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wskf\" (UniqueName: \"kubernetes.io/projected/c5c31acb-2c1f-4923-833a-68de35fb9d54-kube-api-access-4wskf\") pod \"heat-operator-controller-manager-69f49c598c-qfk95\" (UID: \"c5c31acb-2c1f-4923-833a-68de35fb9d54\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjz2\" (UniqueName: \"kubernetes.io/projected/196a8044-f16c-465d-a1e4-e1e6703bf050-kube-api-access-ldjz2\") pod \"glance-operator-controller-manager-77987464f4-q4s5n\" (UID: \"196a8044-f16c-465d-a1e4-e1e6703bf050\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548770 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548805 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5jf\" (UniqueName: \"kubernetes.io/projected/08ea33ce-0d14-439c-9e63-f06d21d6907a-kube-api-access-7h5jf\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.548844 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6vv\" (UniqueName: \"kubernetes.io/projected/ab3c44e4-8127-4e37-a4be-44e1b85ef218-kube-api-access-cw6vv\") pod \"horizon-operator-controller-manager-5b9b8895d5-dtgqt\" (UID: \"ab3c44e4-8127-4e37-a4be-44e1b85ef218\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.558047 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.574477 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjz2\" (UniqueName: \"kubernetes.io/projected/196a8044-f16c-465d-a1e4-e1e6703bf050-kube-api-access-ldjz2\") pod \"glance-operator-controller-manager-77987464f4-q4s5n\" (UID: \"196a8044-f16c-465d-a1e4-e1e6703bf050\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.576217 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.585424 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlnk\" (UniqueName: \"kubernetes.io/projected/78fc7ff4-fa73-4323-9756-db5902a66158-kube-api-access-zvlnk\") pod \"cinder-operator-controller-manager-5d946d989d-zw426\" (UID: \"78fc7ff4-fa73-4323-9756-db5902a66158\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.587282 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.597633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4dln\" (UniqueName: \"kubernetes.io/projected/0f46711f-425e-4dbb-8a5d-ed6084adfde8-kube-api-access-h4dln\") pod \"designate-operator-controller-manager-6d8bf5c495-wq29b\" (UID: \"0f46711f-425e-4dbb-8a5d-ed6084adfde8\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.598533 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.609442 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5gqt2" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.635203 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.655253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wskf\" (UniqueName: \"kubernetes.io/projected/c5c31acb-2c1f-4923-833a-68de35fb9d54-kube-api-access-4wskf\") pod \"heat-operator-controller-manager-69f49c598c-qfk95\" (UID: \"c5c31acb-2c1f-4923-833a-68de35fb9d54\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.655388 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.655445 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5jf\" (UniqueName: \"kubernetes.io/projected/08ea33ce-0d14-439c-9e63-f06d21d6907a-kube-api-access-7h5jf\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.655502 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfdj\" (UniqueName: \"kubernetes.io/projected/1def9ee0-6aa7-4cc0-a709-a66e4c952d03-kube-api-access-ldfdj\") pod \"ironic-operator-controller-manager-554564d7fc-fgsbn\" (UID: \"1def9ee0-6aa7-4cc0-a709-a66e4c952d03\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.655535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6vv\" (UniqueName: \"kubernetes.io/projected/ab3c44e4-8127-4e37-a4be-44e1b85ef218-kube-api-access-cw6vv\") pod \"horizon-operator-controller-manager-5b9b8895d5-dtgqt\" (UID: \"ab3c44e4-8127-4e37-a4be-44e1b85ef218\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:46:33 crc kubenswrapper[4812]: E0218 16:46:33.656168 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:33 crc kubenswrapper[4812]: E0218 16:46:33.656239 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:46:34.156221633 +0000 UTC m=+1014.421832542 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.675909 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.695549 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.700299 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5jf\" (UniqueName: \"kubernetes.io/projected/08ea33ce-0d14-439c-9e63-f06d21d6907a-kube-api-access-7h5jf\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.700404 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6vv\" (UniqueName: \"kubernetes.io/projected/ab3c44e4-8127-4e37-a4be-44e1b85ef218-kube-api-access-cw6vv\") pod \"horizon-operator-controller-manager-5b9b8895d5-dtgqt\" (UID: \"ab3c44e4-8127-4e37-a4be-44e1b85ef218\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.701251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wskf\" (UniqueName: \"kubernetes.io/projected/c5c31acb-2c1f-4923-833a-68de35fb9d54-kube-api-access-4wskf\") pod \"heat-operator-controller-manager-69f49c598c-qfk95\" (UID: \"c5c31acb-2c1f-4923-833a-68de35fb9d54\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.705219 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.737785 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.757269 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.776337 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfdj\" (UniqueName: \"kubernetes.io/projected/1def9ee0-6aa7-4cc0-a709-a66e4c952d03-kube-api-access-ldfdj\") pod \"ironic-operator-controller-manager-554564d7fc-fgsbn\" (UID: \"1def9ee0-6aa7-4cc0-a709-a66e4c952d03\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.783235 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.784556 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.785913 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.802344 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.803549 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.803713 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8qnq4" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.805954 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v9wdq" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.812171 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.815345 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.815828 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfdj\" (UniqueName: \"kubernetes.io/projected/1def9ee0-6aa7-4cc0-a709-a66e4c952d03-kube-api-access-ldfdj\") pod \"ironic-operator-controller-manager-554564d7fc-fgsbn\" (UID: \"1def9ee0-6aa7-4cc0-a709-a66e4c952d03\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.816494 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.839255 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lkr5s" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.864583 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.875856 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.879617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmc9c\" (UniqueName: \"kubernetes.io/projected/8717609a-7f7e-4de2-b0ec-93cc0539c922-kube-api-access-dmc9c\") pod \"keystone-operator-controller-manager-b4d948c87-57f4l\" (UID: \"8717609a-7f7e-4de2-b0ec-93cc0539c922\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.879683 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmnsg\" (UniqueName: \"kubernetes.io/projected/fab34061-20c2-4e93-b9fb-3d8a62ffdb72-kube-api-access-dmnsg\") pod \"manila-operator-controller-manager-54f6768c69-mnln4\" (UID: \"fab34061-20c2-4e93-b9fb-3d8a62ffdb72\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.879718 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5vt\" (UniqueName: \"kubernetes.io/projected/7bd80a27-b40d-4a43-8956-01e91ba58029-kube-api-access-vf5vt\") pod \"mariadb-operator-controller-manager-6994f66f48-5v97q\" (UID: \"7bd80a27-b40d-4a43-8956-01e91ba58029\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.901138 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.902465 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.906680 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8hxb4" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.917247 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.969552 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7"] Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.970937 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.980626 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvmp\" (UniqueName: \"kubernetes.io/projected/bb6257e0-6420-4136-858f-ee944d0493e3-kube-api-access-5lvmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-s886f\" (UID: \"bb6257e0-6420-4136-858f-ee944d0493e3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.980751 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmc9c\" (UniqueName: \"kubernetes.io/projected/8717609a-7f7e-4de2-b0ec-93cc0539c922-kube-api-access-dmc9c\") pod \"keystone-operator-controller-manager-b4d948c87-57f4l\" (UID: \"8717609a-7f7e-4de2-b0ec-93cc0539c922\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.980806 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmnsg\" (UniqueName: \"kubernetes.io/projected/fab34061-20c2-4e93-b9fb-3d8a62ffdb72-kube-api-access-dmnsg\") pod \"manila-operator-controller-manager-54f6768c69-mnln4\" (UID: \"fab34061-20c2-4e93-b9fb-3d8a62ffdb72\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.980833 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5vt\" (UniqueName: \"kubernetes.io/projected/7bd80a27-b40d-4a43-8956-01e91ba58029-kube-api-access-vf5vt\") pod \"mariadb-operator-controller-manager-6994f66f48-5v97q\" (UID: \"7bd80a27-b40d-4a43-8956-01e91ba58029\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.981147 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.988004 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-szncp" Feb 18 16:46:33 crc kubenswrapper[4812]: I0218 16:46:33.989905 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.001212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.025700 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmnsg\" (UniqueName: \"kubernetes.io/projected/fab34061-20c2-4e93-b9fb-3d8a62ffdb72-kube-api-access-dmnsg\") pod \"manila-operator-controller-manager-54f6768c69-mnln4\" (UID: \"fab34061-20c2-4e93-b9fb-3d8a62ffdb72\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.025804 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5vt\" (UniqueName: \"kubernetes.io/projected/7bd80a27-b40d-4a43-8956-01e91ba58029-kube-api-access-vf5vt\") pod \"mariadb-operator-controller-manager-6994f66f48-5v97q\" (UID: \"7bd80a27-b40d-4a43-8956-01e91ba58029\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.026380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmc9c\" (UniqueName: \"kubernetes.io/projected/8717609a-7f7e-4de2-b0ec-93cc0539c922-kube-api-access-dmc9c\") pod \"keystone-operator-controller-manager-b4d948c87-57f4l\" (UID: \"8717609a-7f7e-4de2-b0ec-93cc0539c922\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.027203 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.028382 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.032318 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bmgl8" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.043664 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.055475 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.057361 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.067411 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.077577 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.079128 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.086682 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.087009 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hmxxl" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.087261 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bxvwp" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.088281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvmp\" (UniqueName: \"kubernetes.io/projected/bb6257e0-6420-4136-858f-ee944d0493e3-kube-api-access-5lvmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-s886f\" (UID: \"bb6257e0-6420-4136-858f-ee944d0493e3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.088365 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4ht\" (UniqueName: \"kubernetes.io/projected/d2a5bf35-89b6-4fee-94e4-d118f9cfacc3-kube-api-access-5f4ht\") pod \"octavia-operator-controller-manager-69f8888797-gvlj5\" (UID: \"d2a5bf35-89b6-4fee-94e4-d118f9cfacc3\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.088411 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrrr\" (UniqueName: \"kubernetes.io/projected/8dde41a0-6a01-4fdf-afe1-caf72e221917-kube-api-access-vnrrr\") pod \"nova-operator-controller-manager-567668f5cf-7t4r7\" (UID: \"8dde41a0-6a01-4fdf-afe1-caf72e221917\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.088438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.088458 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjrj\" (UniqueName: \"kubernetes.io/projected/8cfe2837-e258-42f2-8634-f20c3142d708-kube-api-access-5mjrj\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.090489 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.091902 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.102733 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dwkz7" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.104744 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.121167 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ths6j"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.131339 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.132028 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.136481 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.136639 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.143383 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8z6jb" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.143723 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-68gml" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.150718 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.150941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvmp\" (UniqueName: \"kubernetes.io/projected/bb6257e0-6420-4136-858f-ee944d0493e3-kube-api-access-5lvmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-s886f\" (UID: \"bb6257e0-6420-4136-858f-ee944d0493e3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.173351 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.176944 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.188149 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ths6j"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.189703 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.190418 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f4ht\" (UniqueName: \"kubernetes.io/projected/d2a5bf35-89b6-4fee-94e4-d118f9cfacc3-kube-api-access-5f4ht\") pod \"octavia-operator-controller-manager-69f8888797-gvlj5\" (UID: \"d2a5bf35-89b6-4fee-94e4-d118f9cfacc3\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.190601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8dw\" (UniqueName: \"kubernetes.io/projected/1992f7af-ff5e-4b9d-9820-134811e95a33-kube-api-access-th8dw\") pod \"telemetry-operator-controller-manager-7f45b4ff68-kztm9\" (UID: \"1992f7af-ff5e-4b9d-9820-134811e95a33\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.190799 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrrr\" (UniqueName: \"kubernetes.io/projected/8dde41a0-6a01-4fdf-afe1-caf72e221917-kube-api-access-vnrrr\") pod \"nova-operator-controller-manager-567668f5cf-7t4r7\" (UID: \"8dde41a0-6a01-4fdf-afe1-caf72e221917\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.191286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.191411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjrj\" (UniqueName: \"kubernetes.io/projected/8cfe2837-e258-42f2-8634-f20c3142d708-kube-api-access-5mjrj\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.191981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkcqk\" (UniqueName: \"kubernetes.io/projected/77a78e58-327b-41c9-9476-ed0c0d665938-kube-api-access-nkcqk\") pod \"swift-operator-controller-manager-68f46476f-ths6j\" (UID: \"77a78e58-327b-41c9-9476-ed0c0d665938\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.191781 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.192217 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:34.692197398 +0000 UTC m=+1014.957808307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.192501 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.192753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2lrz\" (UniqueName: \"kubernetes.io/projected/d9f004a9-719f-44da-8afc-8d107e751740-kube-api-access-c2lrz\") pod \"placement-operator-controller-manager-8497b45c89-8ts5w\" (UID: \"d9f004a9-719f-44da-8afc-8d107e751740\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.192846 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cc2\" (UniqueName: \"kubernetes.io/projected/07b7334e-7887-47ab-b54a-950e0abef136-kube-api-access-h5cc2\") pod \"ovn-operator-controller-manager-d44cf6b75-svznc\" (UID: \"07b7334e-7887-47ab-b54a-950e0abef136\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.192719 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.193072 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:46:35.19306053 +0000 UTC m=+1015.458671439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.195083 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gpk4n"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.198651 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.201187 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kkfm6" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.211344 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gpk4n"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.217940 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrrr\" (UniqueName: \"kubernetes.io/projected/8dde41a0-6a01-4fdf-afe1-caf72e221917-kube-api-access-vnrrr\") pod \"nova-operator-controller-manager-567668f5cf-7t4r7\" (UID: \"8dde41a0-6a01-4fdf-afe1-caf72e221917\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.222650 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.223941 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.229298 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4ht\" (UniqueName: \"kubernetes.io/projected/d2a5bf35-89b6-4fee-94e4-d118f9cfacc3-kube-api-access-5f4ht\") pod \"octavia-operator-controller-manager-69f8888797-gvlj5\" (UID: \"d2a5bf35-89b6-4fee-94e4-d118f9cfacc3\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.229961 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7ntt2" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.243770 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.253488 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.289229 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.291424 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.292635 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294427 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ds5f\" (UniqueName: \"kubernetes.io/projected/77dd6c28-0191-413f-90f0-9c85b340dd9c-kube-api-access-7ds5f\") pod \"test-operator-controller-manager-7866795846-gpk4n\" (UID: \"77dd6c28-0191-413f-90f0-9c85b340dd9c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8dw\" (UniqueName: \"kubernetes.io/projected/1992f7af-ff5e-4b9d-9820-134811e95a33-kube-api-access-th8dw\") pod \"telemetry-operator-controller-manager-7f45b4ff68-kztm9\" (UID: \"1992f7af-ff5e-4b9d-9820-134811e95a33\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294570 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294860 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkcqk\" (UniqueName: \"kubernetes.io/projected/77a78e58-327b-41c9-9476-ed0c0d665938-kube-api-access-nkcqk\") pod \"swift-operator-controller-manager-68f46476f-ths6j\" (UID: \"77a78e58-327b-41c9-9476-ed0c0d665938\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294908 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294924 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2lrz\" (UniqueName: \"kubernetes.io/projected/d9f004a9-719f-44da-8afc-8d107e751740-kube-api-access-c2lrz\") pod \"placement-operator-controller-manager-8497b45c89-8ts5w\" (UID: \"d9f004a9-719f-44da-8afc-8d107e751740\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.294974 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cc2\" (UniqueName: \"kubernetes.io/projected/07b7334e-7887-47ab-b54a-950e0abef136-kube-api-access-h5cc2\") pod \"ovn-operator-controller-manager-d44cf6b75-svznc\" (UID: \"07b7334e-7887-47ab-b54a-950e0abef136\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.295012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qvb\" (UniqueName: \"kubernetes.io/projected/97e0541c-504a-4610-b930-db20a8c00302-kube-api-access-j5qvb\") pod \"watcher-operator-controller-manager-55ccccfbc7-nmczt\" (UID: \"97e0541c-504a-4610-b930-db20a8c00302\") " pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.296988 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jqb5s" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.297275 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5mjrj\" (UniqueName: \"kubernetes.io/projected/8cfe2837-e258-42f2-8634-f20c3142d708-kube-api-access-5mjrj\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.317068 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.324248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2lrz\" (UniqueName: \"kubernetes.io/projected/d9f004a9-719f-44da-8afc-8d107e751740-kube-api-access-c2lrz\") pod \"placement-operator-controller-manager-8497b45c89-8ts5w\" (UID: \"d9f004a9-719f-44da-8afc-8d107e751740\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.327027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8dw\" (UniqueName: \"kubernetes.io/projected/1992f7af-ff5e-4b9d-9820-134811e95a33-kube-api-access-th8dw\") pod \"telemetry-operator-controller-manager-7f45b4ff68-kztm9\" (UID: \"1992f7af-ff5e-4b9d-9820-134811e95a33\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.327209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkcqk\" (UniqueName: \"kubernetes.io/projected/77a78e58-327b-41c9-9476-ed0c0d665938-kube-api-access-nkcqk\") pod \"swift-operator-controller-manager-68f46476f-ths6j\" (UID: \"77a78e58-327b-41c9-9476-ed0c0d665938\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.327379 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.329172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cc2\" (UniqueName: \"kubernetes.io/projected/07b7334e-7887-47ab-b54a-950e0abef136-kube-api-access-h5cc2\") pod \"ovn-operator-controller-manager-d44cf6b75-svznc\" (UID: \"07b7334e-7887-47ab-b54a-950e0abef136\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.390605 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.396727 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.396876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ds5f\" (UniqueName: \"kubernetes.io/projected/77dd6c28-0191-413f-90f0-9c85b340dd9c-kube-api-access-7ds5f\") pod \"test-operator-controller-manager-7866795846-gpk4n\" (UID: \"77dd6c28-0191-413f-90f0-9c85b340dd9c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.396917 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qh4\" (UniqueName: \"kubernetes.io/projected/7ee716d3-9aa5-4c80-872a-7183662658a1-kube-api-access-l8qh4\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.397003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.397050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qvb\" (UniqueName: \"kubernetes.io/projected/97e0541c-504a-4610-b930-db20a8c00302-kube-api-access-j5qvb\") pod \"watcher-operator-controller-manager-55ccccfbc7-nmczt\" (UID: \"97e0541c-504a-4610-b930-db20a8c00302\") " pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.401506 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.402613 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.405040 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4mpzn" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.424688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qvb\" (UniqueName: \"kubernetes.io/projected/97e0541c-504a-4610-b930-db20a8c00302-kube-api-access-j5qvb\") pod \"watcher-operator-controller-manager-55ccccfbc7-nmczt\" (UID: \"97e0541c-504a-4610-b930-db20a8c00302\") " pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.426834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ds5f\" (UniqueName: \"kubernetes.io/projected/77dd6c28-0191-413f-90f0-9c85b340dd9c-kube-api-access-7ds5f\") pod \"test-operator-controller-manager-7866795846-gpk4n\" (UID: \"77dd6c28-0191-413f-90f0-9c85b340dd9c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.427058 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66"] Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.470218 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.484966 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.498939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.499047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.499115 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfgp\" (UniqueName: \"kubernetes.io/projected/66bb936b-e65a-4f8a-8e24-3066bb11f30e-kube-api-access-jzfgp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njc66\" (UID: \"66bb936b-e65a-4f8a-8e24-3066bb11f30e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.499179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qh4\" (UniqueName: \"kubernetes.io/projected/7ee716d3-9aa5-4c80-872a-7183662658a1-kube-api-access-l8qh4\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" 
(UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.499263 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.499391 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:34.999357522 +0000 UTC m=+1015.264968601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.499789 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.499859 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:34.999839264 +0000 UTC m=+1015.265450373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.534986 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qh4\" (UniqueName: \"kubernetes.io/projected/7ee716d3-9aa5-4c80-872a-7183662658a1-kube-api-access-l8qh4\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.625675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfgp\" (UniqueName: \"kubernetes.io/projected/66bb936b-e65a-4f8a-8e24-3066bb11f30e-kube-api-access-jzfgp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njc66\" (UID: \"66bb936b-e65a-4f8a-8e24-3066bb11f30e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.675060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfgp\" (UniqueName: \"kubernetes.io/projected/66bb936b-e65a-4f8a-8e24-3066bb11f30e-kube-api-access-jzfgp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njc66\" (UID: \"66bb936b-e65a-4f8a-8e24-3066bb11f30e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.728288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: 
\"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.728710 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: E0218 16:46:34.729526 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:35.729502556 +0000 UTC m=+1015.995113465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.804748 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.823473 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:46:34 crc kubenswrapper[4812]: I0218 16:46:34.889489 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.033651 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.034135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.034401 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.034488 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:36.034455086 +0000 UTC m=+1016.300066175 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.034553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.034705 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.034748 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:36.034738322 +0000 UTC m=+1016.300349231 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.171891 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.199975 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.210560 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78fc7ff4_fa73_4323_9756_db5902a66158.slice/crio-9b2ac5f9912826d706eb88a19a76e937d353fd0869f04ca58bfcdd5ac9eec9bb WatchSource:0}: Error finding container 9b2ac5f9912826d706eb88a19a76e937d353fd0869f04ca58bfcdd5ac9eec9bb: Status 404 returned error can't find the container with id 9b2ac5f9912826d706eb88a19a76e937d353fd0869f04ca58bfcdd5ac9eec9bb Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.217255 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.237361 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.237642 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.238143 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert 
podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:46:37.238080066 +0000 UTC m=+1017.503691135 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.408251 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.408894 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c31acb_2c1f_4923_833a_68de35fb9d54.slice/crio-9189f5ed0a1ee2943d54343c04f832f00ecdb78af1d4e0683e970b2969216e29 WatchSource:0}: Error finding container 9189f5ed0a1ee2943d54343c04f832f00ecdb78af1d4e0683e970b2969216e29: Status 404 returned error can't find the container with id 9189f5ed0a1ee2943d54343c04f832f00ecdb78af1d4e0683e970b2969216e29 Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.415879 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.447880 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.449139 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3c44e4_8127_4e37_a4be_44e1b85ef218.slice/crio-48a6889a1b42b5cc65a044e0fefe87cc89de8413c8a6940502937d3f1dfbb050 WatchSource:0}: Error finding container 48a6889a1b42b5cc65a044e0fefe87cc89de8413c8a6940502937d3f1dfbb050: Status 404 returned error can't find the container with id 48a6889a1b42b5cc65a044e0fefe87cc89de8413c8a6940502937d3f1dfbb050 Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.454367 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196a8044_f16c_465d_a1e4_e1e6703bf050.slice/crio-b7b1c76d8b0d5bcbe72913f63fdffc3477895d28818e64f67436b2b0f9bc63e9 WatchSource:0}: Error finding container b7b1c76d8b0d5bcbe72913f63fdffc3477895d28818e64f67436b2b0f9bc63e9: Status 404 returned error can't find the container with id b7b1c76d8b0d5bcbe72913f63fdffc3477895d28818e64f67436b2b0f9bc63e9 Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.458232 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.608150 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.622333 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.622969 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.625535 4812 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a5bf35_89b6_4fee_94e4_d118f9cfacc3.slice/crio-2ec96573732a5d30a2b73f9e9a30f240b0ccc9d9b60ed6c17118438c90b6a803 WatchSource:0}: Error finding container 2ec96573732a5d30a2b73f9e9a30f240b0ccc9d9b60ed6c17118438c90b6a803: Status 404 returned error can't find the container with id 2ec96573732a5d30a2b73f9e9a30f240b0ccc9d9b60ed6c17118438c90b6a803 Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.632117 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.658379 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.677800 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc"] Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.692492 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5cc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-svznc_openstack-operators(07b7334e-7887-47ab-b54a-950e0abef136): ErrImagePull: pull QPS 
exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.693649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" podUID="07b7334e-7887-47ab-b54a-950e0abef136" Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.693734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" event={"ID":"d2a5bf35-89b6-4fee-94e4-d118f9cfacc3","Type":"ContainerStarted","Data":"2ec96573732a5d30a2b73f9e9a30f240b0ccc9d9b60ed6c17118438c90b6a803"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.703933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" event={"ID":"ab3c44e4-8127-4e37-a4be-44e1b85ef218","Type":"ContainerStarted","Data":"48a6889a1b42b5cc65a044e0fefe87cc89de8413c8a6940502937d3f1dfbb050"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.706903 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.707372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" event={"ID":"aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa","Type":"ContainerStarted","Data":"042fe868258bbdab6b7d1d11a334ccce548d041e5c5ccebe47316ce84e8bb3df"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.711610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" event={"ID":"1def9ee0-6aa7-4cc0-a709-a66e4c952d03","Type":"ContainerStarted","Data":"932b72cc62c33c5166b83c50afabd6f24a0c9d986d2772713f2e269749ea6a7a"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.714116 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" event={"ID":"c5c31acb-2c1f-4923-833a-68de35fb9d54","Type":"ContainerStarted","Data":"9189f5ed0a1ee2943d54343c04f832f00ecdb78af1d4e0683e970b2969216e29"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.718298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" event={"ID":"bb6257e0-6420-4136-858f-ee944d0493e3","Type":"ContainerStarted","Data":"4f2e528f09df15eaa2d00af8c24c40f3893d02cfbddf9a4733fef4b7189762fa"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.730152 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" event={"ID":"78fc7ff4-fa73-4323-9756-db5902a66158","Type":"ContainerStarted","Data":"9b2ac5f9912826d706eb88a19a76e937d353fd0869f04ca58bfcdd5ac9eec9bb"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.734199 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" event={"ID":"0f46711f-425e-4dbb-8a5d-ed6084adfde8","Type":"ContainerStarted","Data":"2198227d51e1122b4a65dd4ad404588fbccecc979d561da07af870e60cc43e4e"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.735157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" 
event={"ID":"196a8044-f16c-465d-a1e4-e1e6703bf050","Type":"ContainerStarted","Data":"b7b1c76d8b0d5bcbe72913f63fdffc3477895d28818e64f67436b2b0f9bc63e9"} Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.779948 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-ths6j"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.782367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.782677 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.782746 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:37.782724086 +0000 UTC m=+1018.048335005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.792573 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nkcqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-ths6j_openstack-operators(77a78e58-327b-41c9-9476-ed0c0d665938): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.793838 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" podUID="77a78e58-327b-41c9-9476-ed0c0d665938" Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.824648 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-gpk4n"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.837007 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.843805 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.852173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.857715 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66bb936b_e65a_4f8a_8e24_3066bb11f30e.slice/crio-55a10b28725485b6693878d50d3ec0a225dcb60239c633334158e07772b968a6 WatchSource:0}: Error finding container 55a10b28725485b6693878d50d3ec0a225dcb60239c633334158e07772b968a6: Status 404 returned error can't find the container with id 55a10b28725485b6693878d50d3ec0a225dcb60239c633334158e07772b968a6 Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.860725 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt"] Feb 18 16:46:35 crc kubenswrapper[4812]: I0218 16:46:35.868899 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9"] Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.875411 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f004a9_719f_44da_8afc_8d107e751740.slice/crio-5e460b4a931bf2807e67477319a67000b747353c0dc37827b13ed7fab848f63b WatchSource:0}: Error finding container 5e460b4a931bf2807e67477319a67000b747353c0dc37827b13ed7fab848f63b: Status 404 returned error can't find the container with id 
5e460b4a931bf2807e67477319a67000b747353c0dc37827b13ed7fab848f63b Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.876508 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77dd6c28_0191_413f_90f0_9c85b340dd9c.slice/crio-e06b46a26a01a4a980e0c18769a2f9c5c15c7abff9f8266b9de0c2cfaca1be6c WatchSource:0}: Error finding container e06b46a26a01a4a980e0c18769a2f9c5c15c7abff9f8266b9de0c2cfaca1be6c: Status 404 returned error can't find the container with id e06b46a26a01a4a980e0c18769a2f9c5c15c7abff9f8266b9de0c2cfaca1be6c Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.878832 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfab34061_20c2_4e93_b9fb_3d8a62ffdb72.slice/crio-cb95507ad7e81cc9e4a444a93d25a280405e4f39c29d8fcc345241213c05295c WatchSource:0}: Error finding container cb95507ad7e81cc9e4a444a93d25a280405e4f39c29d8fcc345241213c05295c: Status 404 returned error can't find the container with id cb95507ad7e81cc9e4a444a93d25a280405e4f39c29d8fcc345241213c05295c Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.883330 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1992f7af_ff5e_4b9d_9820_134811e95a33.slice/crio-9b249dbdcd55b58e2884a181c05ef91af32bb91c4df2f80d78953c5bb9857892 WatchSource:0}: Error finding container 9b249dbdcd55b58e2884a181c05ef91af32bb91c4df2f80d78953c5bb9857892: Status 404 returned error can't find the container with id 9b249dbdcd55b58e2884a181c05ef91af32bb91c4df2f80d78953c5bb9857892 Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.884540 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2lrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-8ts5w_openstack-operators(d9f004a9-719f-44da-8afc-8d107e751740): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.884914 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ds5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-gpk4n_openstack-operators(77dd6c28-0191-413f-90f0-9c85b340dd9c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc 
kubenswrapper[4812]: E0218 16:46:35.885717 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" podUID="d9f004a9-719f-44da-8afc-8d107e751740" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.886000 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" podUID="77dd6c28-0191-413f-90f0-9c85b340dd9c" Feb 18 16:46:35 crc kubenswrapper[4812]: W0218 16:46:35.886401 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e0541c_504a_4610_b930_db20a8c00302.slice/crio-61c324c66f297bd060de790cec291b49377b35784c006468107002a9034c102e WatchSource:0}: Error finding container 61c324c66f297bd060de790cec291b49377b35784c006468107002a9034c102e: Status 404 returned error can't find the container with id 61c324c66f297bd060de790cec291b49377b35784c006468107002a9034c102e Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.888853 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th8dw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod telemetry-operator-controller-manager-7f45b4ff68-kztm9_openstack-operators(1992f7af-ff5e-4b9d-9820-134811e95a33): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.888917 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmnsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-mnln4_openstack-operators(fab34061-20c2-4e93-b9fb-3d8a62ffdb72): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.890158 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" podUID="fab34061-20c2-4e93-b9fb-3d8a62ffdb72" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.890740 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.243:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5qvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-55ccccfbc7-nmczt_openstack-operators(97e0541c-504a-4610-b930-db20a8c00302): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.890692 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" podUID="1992f7af-ff5e-4b9d-9820-134811e95a33" Feb 18 16:46:35 crc kubenswrapper[4812]: E0218 16:46:35.891925 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" podUID="97e0541c-504a-4610-b930-db20a8c00302" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.095286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.095364 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.095545 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.095622 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:38.095602021 +0000 UTC m=+1018.361212930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.095682 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.095702 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:38.095696673 +0000 UTC m=+1018.361307582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.747747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" event={"ID":"66bb936b-e65a-4f8a-8e24-3066bb11f30e","Type":"ContainerStarted","Data":"55a10b28725485b6693878d50d3ec0a225dcb60239c633334158e07772b968a6"} Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.751118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" event={"ID":"77a78e58-327b-41c9-9476-ed0c0d665938","Type":"ContainerStarted","Data":"df2a9a98fdbd2cfe86c75032de2aa2b434bfb955e46b4d02caed85bb323906da"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.753566 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" podUID="77a78e58-327b-41c9-9476-ed0c0d665938" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.777078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" event={"ID":"97e0541c-504a-4610-b930-db20a8c00302","Type":"ContainerStarted","Data":"61c324c66f297bd060de790cec291b49377b35784c006468107002a9034c102e"} Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.783802 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" event={"ID":"fab34061-20c2-4e93-b9fb-3d8a62ffdb72","Type":"ContainerStarted","Data":"cb95507ad7e81cc9e4a444a93d25a280405e4f39c29d8fcc345241213c05295c"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.785238 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" podUID="97e0541c-504a-4610-b930-db20a8c00302" Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.785839 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" podUID="fab34061-20c2-4e93-b9fb-3d8a62ffdb72" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.788487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" event={"ID":"d9f004a9-719f-44da-8afc-8d107e751740","Type":"ContainerStarted","Data":"5e460b4a931bf2807e67477319a67000b747353c0dc37827b13ed7fab848f63b"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.793246 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" podUID="d9f004a9-719f-44da-8afc-8d107e751740" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.803036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" event={"ID":"77dd6c28-0191-413f-90f0-9c85b340dd9c","Type":"ContainerStarted","Data":"e06b46a26a01a4a980e0c18769a2f9c5c15c7abff9f8266b9de0c2cfaca1be6c"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.806220 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" podUID="77dd6c28-0191-413f-90f0-9c85b340dd9c" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.809550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" event={"ID":"8717609a-7f7e-4de2-b0ec-93cc0539c922","Type":"ContainerStarted","Data":"21524b6b2414c479bc93470329d0b9de4175487e8605f4e0970b851f248a014e"} Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.811651 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" event={"ID":"1992f7af-ff5e-4b9d-9820-134811e95a33","Type":"ContainerStarted","Data":"9b249dbdcd55b58e2884a181c05ef91af32bb91c4df2f80d78953c5bb9857892"} 
Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.814000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" event={"ID":"8dde41a0-6a01-4fdf-afe1-caf72e221917","Type":"ContainerStarted","Data":"dd851d0a16ce52bd257db53da22d9eccdbbd93c3d15a904a93b09c418bde79e6"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.814163 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" podUID="1992f7af-ff5e-4b9d-9820-134811e95a33" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.836732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" event={"ID":"07b7334e-7887-47ab-b54a-950e0abef136","Type":"ContainerStarted","Data":"2724deedd3073b53181b64c3632a827754311a422d9adf3fc217073d565b82c0"} Feb 18 16:46:36 crc kubenswrapper[4812]: E0218 16:46:36.868581 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" podUID="07b7334e-7887-47ab-b54a-950e0abef136" Feb 18 16:46:36 crc kubenswrapper[4812]: I0218 16:46:36.875253 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" event={"ID":"7bd80a27-b40d-4a43-8956-01e91ba58029","Type":"ContainerStarted","Data":"9f9e2085f91241d4ea307fc3d7d8811e69862236e84e7ef38629a172a16976e0"} Feb 18 16:46:37 crc kubenswrapper[4812]: I0218 16:46:37.329168 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.329450 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.329585 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:46:41.329538957 +0000 UTC m=+1021.595149866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:37 crc kubenswrapper[4812]: I0218 16:46:37.837353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.837586 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.837649 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:41.837629595 +0000 UTC m=+1022.103240504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.912496 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" podUID="fab34061-20c2-4e93-b9fb-3d8a62ffdb72" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.912682 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" podUID="77dd6c28-0191-413f-90f0-9c85b340dd9c" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.913793 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" podUID="1992f7af-ff5e-4b9d-9820-134811e95a33" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.913874 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" podUID="97e0541c-504a-4610-b930-db20a8c00302" Feb 18 16:46:37 crc 
kubenswrapper[4812]: E0218 16:46:37.913973 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" podUID="d9f004a9-719f-44da-8afc-8d107e751740" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.914029 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" podUID="77a78e58-327b-41c9-9476-ed0c0d665938" Feb 18 16:46:37 crc kubenswrapper[4812]: E0218 16:46:37.934047 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" podUID="07b7334e-7887-47ab-b54a-950e0abef136" Feb 18 16:46:38 crc kubenswrapper[4812]: I0218 16:46:38.152960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:38 crc kubenswrapper[4812]: I0218 16:46:38.153053 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:38 crc kubenswrapper[4812]: E0218 16:46:38.153242 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:38 crc kubenswrapper[4812]: E0218 16:46:38.153298 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:42.153281678 +0000 UTC m=+1022.418892587 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:38 crc kubenswrapper[4812]: E0218 16:46:38.153719 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:38 crc kubenswrapper[4812]: E0218 16:46:38.153747 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:42.153739479 +0000 UTC m=+1022.419350388 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:41 crc kubenswrapper[4812]: I0218 16:46:41.428870 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:41 crc kubenswrapper[4812]: E0218 16:46:41.429240 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:41 crc kubenswrapper[4812]: E0218 16:46:41.429373 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:46:49.429340257 +0000 UTC m=+1029.694951326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:41 crc kubenswrapper[4812]: I0218 16:46:41.937254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:41 crc kubenswrapper[4812]: E0218 16:46:41.937545 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:41 crc kubenswrapper[4812]: E0218 16:46:41.937687 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:49.937648839 +0000 UTC m=+1030.203259788 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:42 crc kubenswrapper[4812]: I0218 16:46:42.242866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:42 crc kubenswrapper[4812]: I0218 16:46:42.242996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:42 crc kubenswrapper[4812]: E0218 16:46:42.243279 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:42 crc kubenswrapper[4812]: E0218 16:46:42.243361 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:50.243338927 +0000 UTC m=+1030.508949846 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:42 crc kubenswrapper[4812]: E0218 16:46:42.243872 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:42 crc kubenswrapper[4812]: E0218 16:46:42.243926 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:46:50.243912111 +0000 UTC m=+1030.509523040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.454332 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.455154 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4dln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-wq29b_openstack-operators(0f46711f-425e-4dbb-8a5d-ed6084adfde8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.456712 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" podUID="0f46711f-425e-4dbb-8a5d-ed6084adfde8" Feb 18 16:46:49 crc kubenswrapper[4812]: I0218 16:46:49.478598 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.478805 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.478893 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert podName:08ea33ce-0d14-439c-9e63-f06d21d6907a nodeName:}" failed. No retries permitted until 2026-02-18 16:47:05.478872966 +0000 UTC m=+1045.744483875 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert") pod "infra-operator-controller-manager-79d975b745-cwrzs" (UID: "08ea33ce-0d14-439c-9e63-f06d21d6907a") : secret "infra-operator-webhook-server-cert" not found Feb 18 16:46:49 crc kubenswrapper[4812]: I0218 16:46:49.985631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.985837 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:49 crc kubenswrapper[4812]: E0218 16:46:49.985898 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert podName:8cfe2837-e258-42f2-8634-f20c3142d708 nodeName:}" failed. No retries permitted until 2026-02-18 16:47:05.985881187 +0000 UTC m=+1046.251492096 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" (UID: "8cfe2837-e258-42f2-8634-f20c3142d708") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.013785 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" podUID="0f46711f-425e-4dbb-8a5d-ed6084adfde8" Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.158618 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.158809 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldjz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-77987464f4-q4s5n_openstack-operators(196a8044-f16c-465d-a1e4-e1e6703bf050): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.160156 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" podUID="196a8044-f16c-465d-a1e4-e1e6703bf050" Feb 18 16:46:50 crc kubenswrapper[4812]: I0218 16:46:50.289705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:50 crc kubenswrapper[4812]: I0218 16:46:50.289801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.290013 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.290164 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:47:06.290132049 +0000 UTC m=+1046.555743118 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "metrics-server-cert" not found Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.290027 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 16:46:50 crc kubenswrapper[4812]: E0218 16:46:50.290671 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs podName:7ee716d3-9aa5-4c80-872a-7183662658a1 nodeName:}" failed. No retries permitted until 2026-02-18 16:47:06.290659012 +0000 UTC m=+1046.556270131 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs") pod "openstack-operator-controller-manager-7d47b7586b-kpwkf" (UID: "7ee716d3-9aa5-4c80-872a-7183662658a1") : secret "webhook-server-cert" not found Feb 18 16:46:51 crc kubenswrapper[4812]: E0218 16:46:51.042537 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 18 16:46:51 crc kubenswrapper[4812]: E0218 16:46:51.043518 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldfdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-fgsbn_openstack-operators(1def9ee0-6aa7-4cc0-a709-a66e4c952d03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:51 crc kubenswrapper[4812]: E0218 16:46:51.044638 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" 
podUID="1def9ee0-6aa7-4cc0-a709-a66e4c952d03" Feb 18 16:46:51 crc kubenswrapper[4812]: E0218 16:46:51.045934 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" podUID="196a8044-f16c-465d-a1e4-e1e6703bf050" Feb 18 16:46:52 crc kubenswrapper[4812]: E0218 16:46:52.052315 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" podUID="1def9ee0-6aa7-4cc0-a709-a66e4c952d03" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.194997 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.195544 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmc9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-57f4l_openstack-operators(8717609a-7f7e-4de2-b0ec-93cc0539c922): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.196775 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" podUID="8717609a-7f7e-4de2-b0ec-93cc0539c922" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.643852 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.644056 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzfgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-njc66_openstack-operators(66bb936b-e65a-4f8a-8e24-3066bb11f30e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:54 crc kubenswrapper[4812]: E0218 16:46:54.645573 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" podUID="66bb936b-e65a-4f8a-8e24-3066bb11f30e" Feb 18 16:46:55 crc kubenswrapper[4812]: E0218 16:46:55.344599 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" podUID="66bb936b-e65a-4f8a-8e24-3066bb11f30e" Feb 18 16:46:55 crc kubenswrapper[4812]: E0218 16:46:55.344895 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" podUID="8717609a-7f7e-4de2-b0ec-93cc0539c922" Feb 18 16:46:55 crc kubenswrapper[4812]: E0218 16:46:55.722054 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 16:46:55 crc kubenswrapper[4812]: E0218 16:46:55.722777 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnrrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-7t4r7_openstack-operators(8dde41a0-6a01-4fdf-afe1-caf72e221917): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:46:55 crc kubenswrapper[4812]: E0218 16:46:55.723921 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" podUID="8dde41a0-6a01-4fdf-afe1-caf72e221917" Feb 18 16:46:56 crc kubenswrapper[4812]: E0218 16:46:56.350482 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" podUID="8dde41a0-6a01-4fdf-afe1-caf72e221917" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.474591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" event={"ID":"78fc7ff4-fa73-4323-9756-db5902a66158","Type":"ContainerStarted","Data":"09aae6547be9c47cff11fbd323b61550d55b8002964015b2f889859b952ea2a4"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.477302 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.479410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" 
event={"ID":"97e0541c-504a-4610-b930-db20a8c00302","Type":"ContainerStarted","Data":"0e42718cb4e30f8fdd392b1a4a5b93d8133637e1336376ec9149fbca4d1bf60f"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.480219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.482322 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" event={"ID":"77dd6c28-0191-413f-90f0-9c85b340dd9c","Type":"ContainerStarted","Data":"50484d44c177754f7e9a6307d70a99944e1f76546957363edf4288e780e2b550"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.482776 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.488082 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" event={"ID":"07b7334e-7887-47ab-b54a-950e0abef136","Type":"ContainerStarted","Data":"4241098bbbeff88c3239526a950cf34cf202afa6971129a301be619d7a88d48b"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.488307 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.495140 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" event={"ID":"bb6257e0-6420-4136-858f-ee944d0493e3","Type":"ContainerStarted","Data":"65e74fb99c24ec69ba3436b39495b71a279321491b443824ce35705c93b6a46a"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.496157 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.500112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" event={"ID":"1992f7af-ff5e-4b9d-9820-134811e95a33","Type":"ContainerStarted","Data":"9af892338d87533a0c57abbd71c66464ebcd486501121c54ede0cb76efa0db22"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.500438 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.502433 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" event={"ID":"fab34061-20c2-4e93-b9fb-3d8a62ffdb72","Type":"ContainerStarted","Data":"02330290e7ab442e78177160c78eeeea2c662dbf3c304adee90db55b3c5011fb"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.502720 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.513532 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" podStartSLOduration=7.019185985 podStartE2EDuration="27.513502949s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.21712628 +0000 UTC 
m=+1015.482737179" lastFinishedPulling="2026-02-18 16:46:55.711443244 +0000 UTC m=+1035.977054143" observedRunningTime="2026-02-18 16:47:00.50664721 +0000 UTC m=+1040.772258129" watchObservedRunningTime="2026-02-18 16:47:00.513502949 +0000 UTC m=+1040.779113858" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.527837 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.527897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" event={"ID":"ab3c44e4-8127-4e37-a4be-44e1b85ef218","Type":"ContainerStarted","Data":"1ae9122dad6c378c32a5a1a79c175a079bd2c429cdf4cea3f9b92a20c8a13e9c"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.527928 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.527945 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" event={"ID":"aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa","Type":"ContainerStarted","Data":"c063f1d71f2634171cfedce69cc6b52fec65683851960ab71120a724a9d77574"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.528018 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" event={"ID":"77a78e58-327b-41c9-9476-ed0c0d665938","Type":"ContainerStarted","Data":"0ed61037212be4c7ccab3a1870b52d45de33b92bb72f485ff9ec0a57624737ca"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.528401 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.532556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" event={"ID":"d2a5bf35-89b6-4fee-94e4-d118f9cfacc3","Type":"ContainerStarted","Data":"de2aa6267559bc50291be8312cecceb469c220e73e8c7c901756677ba8a352a7"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.532757 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.535117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" event={"ID":"d9f004a9-719f-44da-8afc-8d107e751740","Type":"ContainerStarted","Data":"7a017663b5fd06a073401858b6d9f75f153418657ba95bebfea7f6c711f210e1"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.535441 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.538564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" event={"ID":"7bd80a27-b40d-4a43-8956-01e91ba58029","Type":"ContainerStarted","Data":"9a17b7b579bd186b9b798a655adf5c52e81378bc0ff5ca033f8b2670989b98fc"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.539544 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.542192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" event={"ID":"c5c31acb-2c1f-4923-833a-68de35fb9d54","Type":"ContainerStarted","Data":"f5347629d2d7bfdcc5b8b9c75a02669630fd1153145c31ba1356a968b85fb606"} Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.543286 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.625524 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" podStartSLOduration=3.689828402 podStartE2EDuration="27.62550434s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.692249335 +0000 UTC m=+1015.957860244" lastFinishedPulling="2026-02-18 16:46:59.627925273 +0000 UTC m=+1039.893536182" observedRunningTime="2026-02-18 16:47:00.592971848 +0000 UTC m=+1040.858582767" watchObservedRunningTime="2026-02-18 16:47:00.62550434 +0000 UTC m=+1040.891115249" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.627052 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" podStartSLOduration=3.876401132 podStartE2EDuration="27.627047168s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.884740041 +0000 UTC m=+1016.150350950" lastFinishedPulling="2026-02-18 16:46:59.635386077 +0000 UTC m=+1039.900996986" observedRunningTime="2026-02-18 16:47:00.621268606 +0000 UTC m=+1040.886879515" watchObservedRunningTime="2026-02-18 16:47:00.627047168 +0000 UTC m=+1040.892658077" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.664930 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" podStartSLOduration=3.937523919 podStartE2EDuration="27.664909502s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.890496863 +0000 UTC m=+1016.156107772" lastFinishedPulling="2026-02-18 16:46:59.617882446 +0000 UTC m=+1039.883493355" observedRunningTime="2026-02-18 16:47:00.664152873 +0000 UTC m=+1040.929763782" watchObservedRunningTime="2026-02-18 16:47:00.664909502 +0000 UTC m=+1040.930520411" Feb 18 16:47:00 crc kubenswrapper[4812]: I0218 16:47:00.712429 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" podStartSLOduration=3.984538578 podStartE2EDuration="27.712408553s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.887902899 +0000 UTC m=+1016.153513808" lastFinishedPulling="2026-02-18 16:46:59.615772874 +0000 UTC m=+1039.881383783" observedRunningTime="2026-02-18 16:47:00.707984474 +0000 UTC m=+1040.973595403" watchObservedRunningTime="2026-02-18 16:47:00.712408553 +0000 UTC m=+1040.978019462" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.014829 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" podStartSLOduration=7.502176315 podStartE2EDuration="28.014795959s" 
podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.683069269 +0000 UTC m=+1015.948680168" lastFinishedPulling="2026-02-18 16:46:56.195688903 +0000 UTC m=+1036.461299812" observedRunningTime="2026-02-18 16:47:01.014301437 +0000 UTC m=+1041.279912346" watchObservedRunningTime="2026-02-18 16:47:01.014795959 +0000 UTC m=+1041.280406868" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.038337 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" podStartSLOduration=4.309386778 podStartE2EDuration="28.038317309s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.888310269 +0000 UTC m=+1016.153921178" lastFinishedPulling="2026-02-18 16:46:59.6172408 +0000 UTC m=+1039.882851709" observedRunningTime="2026-02-18 16:47:01.036667029 +0000 UTC m=+1041.302277938" watchObservedRunningTime="2026-02-18 16:47:01.038317309 +0000 UTC m=+1041.303928218" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.259720 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" podStartSLOduration=4.466076732 podStartE2EDuration="28.259655197s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.792401314 +0000 UTC m=+1016.058012223" lastFinishedPulling="2026-02-18 16:46:59.585979779 +0000 UTC m=+1039.851590688" observedRunningTime="2026-02-18 16:47:01.254320575 +0000 UTC m=+1041.519931484" watchObservedRunningTime="2026-02-18 16:47:01.259655197 +0000 UTC m=+1041.525266106" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.303666 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" podStartSLOduration=7.809230185 podStartE2EDuration="28.303645482s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.216986546 +0000 UTC m=+1015.482597455" lastFinishedPulling="2026-02-18 16:46:55.711401843 +0000 UTC m=+1035.977012752" observedRunningTime="2026-02-18 16:47:01.302729869 +0000 UTC m=+1041.568340778" watchObservedRunningTime="2026-02-18 16:47:01.303645482 +0000 UTC m=+1041.569256391" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.324512 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" podStartSLOduration=8.298812108 podStartE2EDuration="28.324493136s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.685785346 +0000 UTC m=+1015.951396265" lastFinishedPulling="2026-02-18 16:46:55.711466384 +0000 UTC m=+1035.977077293" observedRunningTime="2026-02-18 16:47:01.319666317 +0000 UTC m=+1041.585277226" watchObservedRunningTime="2026-02-18 16:47:01.324493136 +0000 UTC m=+1041.590104045" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.363968 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" podStartSLOduration=4.612984384 podStartE2EDuration="28.363946028s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.884380652 +0000 UTC m=+1016.149991571" lastFinishedPulling="2026-02-18 16:46:59.635342316 +0000 UTC m=+1039.900953215" observedRunningTime="2026-02-18 
16:47:01.355297345 +0000 UTC m=+1041.620908254" watchObservedRunningTime="2026-02-18 16:47:01.363946028 +0000 UTC m=+1041.629556937" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.386490 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" podStartSLOduration=8.093551616 podStartE2EDuration="28.386458014s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.419637823 +0000 UTC m=+1015.685248742" lastFinishedPulling="2026-02-18 16:46:55.712544231 +0000 UTC m=+1035.978155140" observedRunningTime="2026-02-18 16:47:01.379345318 +0000 UTC m=+1041.644956227" watchObservedRunningTime="2026-02-18 16:47:01.386458014 +0000 UTC m=+1041.652068933" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.413733 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" podStartSLOduration=8.156176079 podStartE2EDuration="28.413709435s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.45398084 +0000 UTC m=+1015.719591749" lastFinishedPulling="2026-02-18 16:46:55.711514196 +0000 UTC m=+1035.977125105" observedRunningTime="2026-02-18 16:47:01.413594993 +0000 UTC m=+1041.679205902" watchObservedRunningTime="2026-02-18 16:47:01.413709435 +0000 UTC m=+1041.679320344" Feb 18 16:47:01 crc kubenswrapper[4812]: I0218 16:47:01.444811 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" podStartSLOduration=8.379735043 podStartE2EDuration="28.444779942s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.646295232 +0000 UTC m=+1015.911906141" lastFinishedPulling="2026-02-18 16:46:55.711340131 +0000 UTC m=+1035.976951040" observedRunningTime="2026-02-18 16:47:01.438283751 +0000 UTC m=+1041.703894660" watchObservedRunningTime="2026-02-18 16:47:01.444779942 +0000 UTC m=+1041.710390851" Feb 18 16:47:04 crc kubenswrapper[4812]: I0218 16:47:04.240967 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5v97q" Feb 18 16:47:04 crc kubenswrapper[4812]: I0218 16:47:04.315761 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-gvlj5" Feb 18 16:47:04 crc kubenswrapper[4812]: I0218 16:47:04.574241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" event={"ID":"0f46711f-425e-4dbb-8a5d-ed6084adfde8","Type":"ContainerStarted","Data":"c25081f9e8c085029f883c5be63f99d7f379d5c4e2aaf1b0f4383259a40b2162"} Feb 18 16:47:04 crc kubenswrapper[4812]: I0218 16:47:04.574613 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:47:04 crc kubenswrapper[4812]: I0218 16:47:04.608668 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" podStartSLOduration=2.886626008 podStartE2EDuration="31.608643793s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.421708834 +0000 UTC m=+1015.687319733" 
lastFinishedPulling="2026-02-18 16:47:04.143726609 +0000 UTC m=+1044.409337518" observedRunningTime="2026-02-18 16:47:04.605241059 +0000 UTC m=+1044.870851968" watchObservedRunningTime="2026-02-18 16:47:04.608643793 +0000 UTC m=+1044.874254702" Feb 18 16:47:05 crc kubenswrapper[4812]: I0218 16:47:05.494191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:47:05 crc kubenswrapper[4812]: I0218 16:47:05.503039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ea33ce-0d14-439c-9e63-f06d21d6907a-cert\") pod \"infra-operator-controller-manager-79d975b745-cwrzs\" (UID: \"08ea33ce-0d14-439c-9e63-f06d21d6907a\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:47:05 crc kubenswrapper[4812]: I0218 16:47:05.651082 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tcjpl" Feb 18 16:47:05 crc kubenswrapper[4812]: I0218 16:47:05.658983 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.005341 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.010887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cfe2837-e258-42f2-8634-f20c3142d708-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk\" (UID: \"8cfe2837-e258-42f2-8634-f20c3142d708\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.090236 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs"] Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.151935 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hmxxl" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.160182 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.309806 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.310432 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.314565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-webhook-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.314627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ee716d3-9aa5-4c80-872a-7183662658a1-metrics-certs\") pod \"openstack-operator-controller-manager-7d47b7586b-kpwkf\" (UID: \"7ee716d3-9aa5-4c80-872a-7183662658a1\") " pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.387623 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk"] Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.495575 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jqb5s" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.503275 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.602350 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" event={"ID":"196a8044-f16c-465d-a1e4-e1e6703bf050","Type":"ContainerStarted","Data":"71c2c9306f91b7ef45e4337bdc2847750dc4f9bdcf20b9596d028b8f3cfd00dc"} Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.602841 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.604010 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" event={"ID":"8cfe2837-e258-42f2-8634-f20c3142d708","Type":"ContainerStarted","Data":"9c3e605aba3b2a554b66a62a6a0b6245ee18be8f8644eeffe6fe56755c6fb8b7"} Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.605196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" event={"ID":"08ea33ce-0d14-439c-9e63-f06d21d6907a","Type":"ContainerStarted","Data":"a658617879a079aa6268aa7784be3068e8870c539c3a3785d5535548dc2891ed"} Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.623745 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" podStartSLOduration=3.536133033 podStartE2EDuration="33.62372851s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.467023472 +0000 UTC m=+1015.732634381" lastFinishedPulling="2026-02-18 16:47:05.554618949 +0000 UTC m=+1045.820229858" observedRunningTime="2026-02-18 16:47:06.618138712 +0000 UTC m=+1046.883749621" watchObservedRunningTime="2026-02-18 16:47:06.62372851 +0000 UTC m=+1046.889339419" Feb 18 16:47:06 crc kubenswrapper[4812]: I0218 16:47:06.977819 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf"] Feb 18 16:47:07 crc kubenswrapper[4812]: I0218 16:47:07.615496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" event={"ID":"7ee716d3-9aa5-4c80-872a-7183662658a1","Type":"ContainerStarted","Data":"3a2f381cc28632e9eda9825eee90c89aa90cbadcf2f14dc56b65f5be585de8c8"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.637877 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" event={"ID":"7ee716d3-9aa5-4c80-872a-7183662658a1","Type":"ContainerStarted","Data":"2f52c42790553bb80c3b907a735194cb38f216005a04c65f35d901384b0a148e"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.639429 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.650695 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" event={"ID":"66bb936b-e65a-4f8a-8e24-3066bb11f30e","Type":"ContainerStarted","Data":"972c87c9554d8fc817a11550b2183570ea8cd7adb3fda2c9712955b7357dea2e"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.652867 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" event={"ID":"8717609a-7f7e-4de2-b0ec-93cc0539c922","Type":"ContainerStarted","Data":"2c6123d17a4a5abb2f51a59819d94b8b336af80980b5c29e13771f78a64b9e3a"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.653173 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.655351 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" event={"ID":"8dde41a0-6a01-4fdf-afe1-caf72e221917","Type":"ContainerStarted","Data":"ae51ab04124a0fe4f3e189b2859fbceadbc3e7a67d5cff3a4496bffcf2081927"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.655687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.660756 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" event={"ID":"1def9ee0-6aa7-4cc0-a709-a66e4c952d03","Type":"ContainerStarted","Data":"2788ebc3eb33117d0025a24721eadf31bf77d43e3484af0da1982b5b3ddfe233"} Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.661053 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.688448 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" podStartSLOduration=36.688423414 podStartE2EDuration="36.688423414s" podCreationTimestamp="2026-02-18 16:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:47:10.681601645 +0000 UTC m=+1050.947212564" watchObservedRunningTime="2026-02-18 16:47:10.688423414 +0000 UTC m=+1050.954034323" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.703955 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" podStartSLOduration=2.96786718 podStartE2EDuration="37.703933336s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.682272139 +0000 UTC m=+1015.947883048" lastFinishedPulling="2026-02-18 16:47:10.418338295 +0000 UTC m=+1050.683949204" observedRunningTime="2026-02-18 16:47:10.697807865 +0000 UTC m=+1050.963418774" watchObservedRunningTime="2026-02-18 16:47:10.703933336 +0000 UTC m=+1050.969544245" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.719911 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" podStartSLOduration=3.089044708 podStartE2EDuration="37.71988828s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.665361852 +0000 UTC m=+1015.930972761" lastFinishedPulling="2026-02-18 16:47:10.296205424 +0000 UTC m=+1050.561816333" observedRunningTime="2026-02-18 16:47:10.716754632 +0000 UTC m=+1050.982365541" watchObservedRunningTime="2026-02-18 16:47:10.71988828 +0000 UTC m=+1050.985499189" Feb 18 16:47:10 crc 
kubenswrapper[4812]: I0218 16:47:10.740306 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njc66" podStartSLOduration=2.542509399 podStartE2EDuration="36.740254242s" podCreationTimestamp="2026-02-18 16:46:34 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.861462697 +0000 UTC m=+1016.127073606" lastFinishedPulling="2026-02-18 16:47:10.05920754 +0000 UTC m=+1050.324818449" observedRunningTime="2026-02-18 16:47:10.739247487 +0000 UTC m=+1051.004858396" watchObservedRunningTime="2026-02-18 16:47:10.740254242 +0000 UTC m=+1051.005865151" Feb 18 16:47:10 crc kubenswrapper[4812]: I0218 16:47:10.766713 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" podStartSLOduration=3.302935742 podStartE2EDuration="37.766689364s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:46:35.674531798 +0000 UTC m=+1015.940142707" lastFinishedPulling="2026-02-18 16:47:10.13828542 +0000 UTC m=+1050.403896329" observedRunningTime="2026-02-18 16:47:10.76166938 +0000 UTC m=+1051.027280299" watchObservedRunningTime="2026-02-18 16:47:10.766689364 +0000 UTC m=+1051.032300273" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.683951 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rns6l" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.703356 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zw426" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.720902 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-wq29b" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.746342 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-q4s5n" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.796552 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qfk95" Feb 18 16:47:13 crc kubenswrapper[4812]: I0218 16:47:13.867945 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dtgqt" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.183199 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-mnln4" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.272583 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-s886f" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.395037 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-svznc" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.473430 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8ts5w" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.489160 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-kztm9" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.809415 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-ths6j" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.835413 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-gpk4n" Feb 18 16:47:14 crc kubenswrapper[4812]: I0218 16:47:14.895032 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-55ccccfbc7-nmczt" Feb 18 16:47:16 crc kubenswrapper[4812]: I0218 16:47:16.519297 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d47b7586b-kpwkf" Feb 18 16:47:22 crc kubenswrapper[4812]: E0218 16:47:22.647858 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" Feb 18 16:47:22 crc kubenswrapper[4812]: E0218 16:47:22.649087 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podi
fied-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,
Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-ant
elope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podif
ied-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mjrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk_openstack-operators(8cfe2837-e258-42f2-8634-f20c3142d708): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:47:22 crc kubenswrapper[4812]: E0218 16:47:22.650496 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" podUID="8cfe2837-e258-42f2-8634-f20c3142d708" Feb 18 16:47:23 crc kubenswrapper[4812]: I0218 16:47:23.974836 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-fgsbn" Feb 18 16:47:24 crc kubenswrapper[4812]: I0218 16:47:24.157342 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-57f4l" Feb 18 16:47:24 crc kubenswrapper[4812]: I0218 16:47:24.330826 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7t4r7" Feb 18 16:47:25 crc kubenswrapper[4812]: E0218 16:47:25.883852 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" podUID="8cfe2837-e258-42f2-8634-f20c3142d708" Feb 18 16:47:26 crc kubenswrapper[4812]: I0218 16:47:26.836302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" event={"ID":"08ea33ce-0d14-439c-9e63-f06d21d6907a","Type":"ContainerStarted","Data":"ece0bc9fd7ec41b9b7137f570dd0e20cbe57343d5bf8553e4f2c6d182a248a6c"} Feb 18 16:47:26 crc kubenswrapper[4812]: I0218 16:47:26.836899 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:47:26 crc kubenswrapper[4812]: I0218 16:47:26.869890 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" podStartSLOduration=34.071584143 podStartE2EDuration="53.869861625s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:47:06.101492183 +0000 UTC m=+1046.367103092" 
lastFinishedPulling="2026-02-18 16:47:25.899769635 +0000 UTC m=+1066.165380574" observedRunningTime="2026-02-18 16:47:26.866183834 +0000 UTC m=+1067.131794763" watchObservedRunningTime="2026-02-18 16:47:26.869861625 +0000 UTC m=+1067.135472544" Feb 18 16:47:33 crc kubenswrapper[4812]: I0218 16:47:33.413459 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:47:33 crc kubenswrapper[4812]: I0218 16:47:33.414248 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:47:35 crc kubenswrapper[4812]: I0218 16:47:35.666149 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cwrzs" Feb 18 16:47:42 crc kubenswrapper[4812]: I0218 16:47:42.977522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" event={"ID":"8cfe2837-e258-42f2-8634-f20c3142d708","Type":"ContainerStarted","Data":"9f9364a66edb49d5c13c7ff79ee645befd42ad00fa61274e0fc756a070a6c79e"} Feb 18 16:47:42 crc kubenswrapper[4812]: I0218 16:47:42.978610 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:47:56 crc kubenswrapper[4812]: I0218 16:47:56.181380 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" Feb 18 16:47:56 crc kubenswrapper[4812]: I0218 16:47:56.219864 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk" podStartSLOduration=47.184672515 podStartE2EDuration="1m23.219847603s" podCreationTimestamp="2026-02-18 16:46:33 +0000 UTC" firstStartedPulling="2026-02-18 16:47:06.39296486 +0000 UTC m=+1046.658575769" lastFinishedPulling="2026-02-18 16:47:42.428139938 +0000 UTC m=+1082.693750857" observedRunningTime="2026-02-18 16:47:43.020120374 +0000 UTC m=+1083.285731313" watchObservedRunningTime="2026-02-18 16:47:56.219847603 +0000 UTC m=+1096.485458512" Feb 18 16:48:03 crc kubenswrapper[4812]: I0218 16:48:03.413556 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:48:03 crc kubenswrapper[4812]: I0218 16:48:03.414079 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.663320 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.665046 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.667530 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xtqlt" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.671262 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.671456 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.671578 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.688406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.736282 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.737643 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.741467 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.750622 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.773385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.773565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbdgp\" (UniqueName: \"kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.875327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbdgp\" (UniqueName: \"kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.875401 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfc5\" (UniqueName: \"kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.875495 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.875540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.875570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.876406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.905521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbdgp\" (UniqueName: \"kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp\") pod \"dnsmasq-dns-675f4bcbfc-bqjt6\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.976548 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.976629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfc5\" (UniqueName: \"kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.976714 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.977649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.977711 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: 
\"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:14 crc kubenswrapper[4812]: I0218 16:48:14.982715 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:15 crc kubenswrapper[4812]: I0218 16:48:15.009602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfc5\" (UniqueName: \"kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5\") pod \"dnsmasq-dns-78dd6ddcc-7dg6v\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:15 crc kubenswrapper[4812]: I0218 16:48:15.052179 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:15 crc kubenswrapper[4812]: I0218 16:48:15.453225 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:15 crc kubenswrapper[4812]: I0218 16:48:15.575330 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:15 crc kubenswrapper[4812]: W0218 16:48:15.582395 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc127b084_aeb8_43be_8e37_9b42b615865e.slice/crio-f785f10df9301656f6e8fb2abfba24ee36ef86a85c7d6ed48733866cd7864928 WatchSource:0}: Error finding container f785f10df9301656f6e8fb2abfba24ee36ef86a85c7d6ed48733866cd7864928: Status 404 returned error can't find the container with id f785f10df9301656f6e8fb2abfba24ee36ef86a85c7d6ed48733866cd7864928 Feb 18 16:48:16 crc kubenswrapper[4812]: I0218 16:48:16.289545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" event={"ID":"f305ff39-6a36-4ec6-b856-a147992d05d4","Type":"ContainerStarted","Data":"134dc089e4901b6300413b29a8ae2912167922efeafd36559ff8aa63a5967225"} Feb 18 16:48:16 crc kubenswrapper[4812]: I0218 16:48:16.290804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" event={"ID":"c127b084-aeb8-43be-8e37-9b42b615865e","Type":"ContainerStarted","Data":"f785f10df9301656f6e8fb2abfba24ee36ef86a85c7d6ed48733866cd7864928"} Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.551536 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.579958 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.581621 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.596157 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.720436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.720537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.720570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlc4\" (UniqueName: \"kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.828846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.829247 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.829302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvlc4\" (UniqueName: \"kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.830489 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.830998 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.887879 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvlc4\" (UniqueName: 
\"kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4\") pod \"dnsmasq-dns-666b6646f7-5k6hk\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.906387 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.912131 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.940340 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.941708 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:17 crc kubenswrapper[4812]: I0218 16:48:17.956434 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.033209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqk2\" (UniqueName: \"kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.033288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.033364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.135832 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.135937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqk2\" (UniqueName: \"kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.135971 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.136838 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.137069 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.174313 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqk2\" (UniqueName: \"kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2\") pod \"dnsmasq-dns-57d769cc4f-hcvqh\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.377776 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.584396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:48:18 crc kubenswrapper[4812]: W0218 16:48:18.591811 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f876dc8_3a88_458c_9a0c_704963d7a1c7.slice/crio-870cc3cac83c168c5c25d014b49498beb04966311bb528720bc1ab3a5de0a218 WatchSource:0}: Error finding container 870cc3cac83c168c5c25d014b49498beb04966311bb528720bc1ab3a5de0a218: Status 404 returned error can't find the container with id 870cc3cac83c168c5c25d014b49498beb04966311bb528720bc1ab3a5de0a218 Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.753566 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.755448 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.758751 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759059 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759145 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759384 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759502 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759067 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.759990 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mw59b" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.763717 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.852773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.852829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.852873 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.852928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.852986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853008 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853032 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2xd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853063 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853147 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.853196 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.911391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:48:18 crc kubenswrapper[4812]: W0218 16:48:18.923468 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52313df7_7636_4aa8_a55e_8520ae930395.slice/crio-e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874 WatchSource:0}: Error finding container e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874: Status 404 returned error can't find the container with id e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874 Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954559 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " 
pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954637 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2xd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954664 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954689 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954708 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954749 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954791 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.954845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.955538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.956225 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.956711 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.956850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.956910 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.958990 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.966985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.967394 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.971557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.974403 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.979821 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dq2xd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:18 crc kubenswrapper[4812]: I0218 16:48:18.995427 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " pod="openstack/rabbitmq-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.062942 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.068602 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.073425 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.074583 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.074599 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.075015 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.075371 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nnbr4" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.075584 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.075831 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.090679 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.110872 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.171858 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.171978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172253 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5x4n\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172415 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.172464 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274226 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274287 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274330 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274427 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5x4n\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.274643 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.276219 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.276547 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.277758 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.278318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.281224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.285215 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.286429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.287458 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.289808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.292653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.302906 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5x4n\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.367187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" event={"ID":"52313df7-7636-4aa8-a55e-8520ae930395","Type":"ContainerStarted","Data":"e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874"} Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.368729 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" event={"ID":"2f876dc8-3a88-458c-9a0c-704963d7a1c7","Type":"ContainerStarted","Data":"870cc3cac83c168c5c25d014b49498beb04966311bb528720bc1ab3a5de0a218"} Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.370607 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.420492 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:48:19 crc kubenswrapper[4812]: I0218 16:48:19.739515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.154313 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.156253 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.159727 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.170082 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.172354 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.172489 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.172761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hf2rx" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.180298 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.282861 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307391 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307517 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhl9\" (UniqueName: \"kubernetes.io/projected/a1c50cf6-624a-4342-bc66-3a0789879e55-kube-api-access-7lhl9\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307618 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307732 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.307755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.384644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerStarted","Data":"7d2a218ba282068334d1cfd30430311a0e24b9c96c0005dcc46b9d174813bded"} Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.389377 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerStarted","Data":"4268b435751aecab5844e7154db881f9ac9ad303cdb6118aa6ef7d9589659b7f"} Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409202 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409280 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409331 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhl9\" (UniqueName: \"kubernetes.io/projected/a1c50cf6-624a-4342-bc66-3a0789879e55-kube-api-access-7lhl9\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.409458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.410223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-kolla-config\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.410805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.410961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-config-data-default\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.411042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1c50cf6-624a-4342-bc66-3a0789879e55-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.411255 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.421039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " 
pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.434181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c50cf6-624a-4342-bc66-3a0789879e55-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.452860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhl9\" (UniqueName: \"kubernetes.io/projected/a1c50cf6-624a-4342-bc66-3a0789879e55-kube-api-access-7lhl9\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.510735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"a1c50cf6-624a-4342-bc66-3a0789879e55\") " pod="openstack/openstack-galera-0" Feb 18 16:48:20 crc kubenswrapper[4812]: I0218 16:48:20.799681 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.476822 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.479215 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.482455 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gcx7t" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.483219 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.483436 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.483727 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.489118 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.547653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.547817 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.547865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.547942 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.547976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.548235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.548452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgvp\" (UniqueName: \"kubernetes.io/projected/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kube-api-access-msgvp\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.548493 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.599349 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.603155 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.606215 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-cl9kk" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.606746 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.606900 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.628307 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.650634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.650913 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.650985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kolla-config\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651463 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgvp\" (UniqueName: \"kubernetes.io/projected/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kube-api-access-msgvp\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651562 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvtw\" (UniqueName: \"kubernetes.io/projected/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kube-api-access-pkvtw\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-config-data\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.651675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.652112 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.652514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.652518 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.652570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.654292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.661126 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.665483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.688505 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msgvp\" (UniqueName: \"kubernetes.io/projected/3d1c27f6-1144-40ce-a66c-a2c1fb4aa128-kube-api-access-msgvp\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.696186 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128\") " pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.746289 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.754614 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kolla-config\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.754715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.754744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkvtw\" (UniqueName: \"kubernetes.io/projected/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kube-api-access-pkvtw\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.754796 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-config-data\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " 
pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.754821 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.755600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kolla-config\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.756018 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e21653f2-3333-4f74-b1c7-3d34c6ab4280-config-data\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.757956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.760138 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21653f2-3333-4f74-b1c7-3d34c6ab4280-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: W0218 16:48:21.765591 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c50cf6_624a_4342_bc66_3a0789879e55.slice/crio-2214ddf59ae90ab68b60a7710d8580c820f87a444e6f51c4f2a8b562132f6840 WatchSource:0}: Error finding container 2214ddf59ae90ab68b60a7710d8580c820f87a444e6f51c4f2a8b562132f6840: Status 404 returned error can't find the container with id 2214ddf59ae90ab68b60a7710d8580c820f87a444e6f51c4f2a8b562132f6840 Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.777873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkvtw\" (UniqueName: \"kubernetes.io/projected/e21653f2-3333-4f74-b1c7-3d34c6ab4280-kube-api-access-pkvtw\") pod \"memcached-0\" (UID: \"e21653f2-3333-4f74-b1c7-3d34c6ab4280\") " pod="openstack/memcached-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.823815 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 16:48:21 crc kubenswrapper[4812]: I0218 16:48:21.931363 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 16:48:22 crc kubenswrapper[4812]: I0218 16:48:22.426661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a1c50cf6-624a-4342-bc66-3a0789879e55","Type":"ContainerStarted","Data":"2214ddf59ae90ab68b60a7710d8580c820f87a444e6f51c4f2a8b562132f6840"} Feb 18 16:48:22 crc kubenswrapper[4812]: I0218 16:48:22.440826 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 16:48:22 crc kubenswrapper[4812]: W0218 16:48:22.487818 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d1c27f6_1144_40ce_a66c_a2c1fb4aa128.slice/crio-91a98896cbb130ba087852941391c92517a09f8dcff98be98982e266d6b3c442 WatchSource:0}: Error finding container 91a98896cbb130ba087852941391c92517a09f8dcff98be98982e266d6b3c442: Status 404 returned error can't find the container with id 91a98896cbb130ba087852941391c92517a09f8dcff98be98982e266d6b3c442 Feb 18 16:48:22 crc kubenswrapper[4812]: I0218 16:48:22.561138 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 16:48:22 crc kubenswrapper[4812]: W0218 16:48:22.563627 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode21653f2_3333_4f74_b1c7_3d34c6ab4280.slice/crio-ffc57a56f2a456a7f1dfa04ed1d67e3679c74c6b60e4ebba21ba6bc8d5c8aa4b WatchSource:0}: Error finding container ffc57a56f2a456a7f1dfa04ed1d67e3679c74c6b60e4ebba21ba6bc8d5c8aa4b: Status 404 returned error can't find the container with id ffc57a56f2a456a7f1dfa04ed1d67e3679c74c6b60e4ebba21ba6bc8d5c8aa4b Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.453901 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128","Type":"ContainerStarted","Data":"91a98896cbb130ba087852941391c92517a09f8dcff98be98982e266d6b3c442"} Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.456833 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e21653f2-3333-4f74-b1c7-3d34c6ab4280","Type":"ContainerStarted","Data":"ffc57a56f2a456a7f1dfa04ed1d67e3679c74c6b60e4ebba21ba6bc8d5c8aa4b"} Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.964720 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.966001 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.970155 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9pcpl" Feb 18 16:48:23 crc kubenswrapper[4812]: I0218 16:48:23.986249 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:48:24 crc kubenswrapper[4812]: I0218 16:48:24.009026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxkm\" (UniqueName: \"kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm\") pod \"kube-state-metrics-0\" (UID: \"edbfaf09-13f2-49f8-8f32-5b149c8c69be\") " pod="openstack/kube-state-metrics-0" Feb 18 16:48:24 crc kubenswrapper[4812]: I0218 16:48:24.113078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxkm\" (UniqueName: \"kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm\") pod \"kube-state-metrics-0\" (UID: \"edbfaf09-13f2-49f8-8f32-5b149c8c69be\") " pod="openstack/kube-state-metrics-0" Feb 18 16:48:24 crc kubenswrapper[4812]: I0218 16:48:24.167962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxkm\" (UniqueName: \"kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm\") pod \"kube-state-metrics-0\" (UID: \"edbfaf09-13f2-49f8-8f32-5b149c8c69be\") " pod="openstack/kube-state-metrics-0" Feb 18 16:48:24 crc kubenswrapper[4812]: I0218 16:48:24.376253 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.284432 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.291801 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.297116 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.298468 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.299437 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.300001 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2nrwm" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.301230 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.301870 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.302405 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.305978 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.308972 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.412936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzw2\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.412986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413134 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413172 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413206 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413231 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.413328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515220 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515311 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515331 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515351 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzw2\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515477 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.515499 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.516860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.517550 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.519040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.522768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.522916 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.523047 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.523108 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af460646b9286704a29606a0b72ed4f0b878dd755da4447874f6899e9b871ead/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.529410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.529598 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.530051 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.537074 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wmzw2\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.576361 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:25 crc kubenswrapper[4812]: I0218 16:48:25.629576 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.888783 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n9n6z"] Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.893826 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.897212 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-q7zth" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.897463 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.897609 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.909788 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9n6z"] Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.953340 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-s46ps"] Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.955120 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:26 crc kubenswrapper[4812]: I0218 16:48:26.976903 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s46ps"] Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.047798 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.047855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-scripts\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.047905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wff\" (UniqueName: \"kubernetes.io/projected/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-kube-api-access-z7wff\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.047972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-combined-ca-bundle\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzc2s\" (UniqueName: \"kubernetes.io/projected/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-kube-api-access-kzc2s\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-log-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-run\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-ovn-controller-tls-certs\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-etc-ovs\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048314 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-lib\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-log\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.048360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-scripts\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.149938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.149996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-ovn-controller-tls-certs\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-etc-ovs\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150149 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-lib\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-log\") pod \"ovn-controller-ovs-s46ps\" (UID: 
\"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-scripts\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-scripts\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wff\" (UniqueName: \"kubernetes.io/projected/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-kube-api-access-z7wff\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-combined-ca-bundle\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzc2s\" (UniqueName: \"kubernetes.io/projected/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-kube-api-access-kzc2s\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150375 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-log-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.150404 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-run\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.151785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-run\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.151848 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.152781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-run-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.152933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-etc-ovs\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.153208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-lib\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.153584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-var-log-ovn\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.153702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-var-log\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.156960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-scripts\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.166876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-ovn-controller-tls-certs\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.167035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-scripts\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.174250 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wff\" (UniqueName: \"kubernetes.io/projected/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-kube-api-access-z7wff\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 
16:48:27.175672 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a2e707c-718f-4f17-9b77-c883f7e9d9f3-combined-ca-bundle\") pod \"ovn-controller-n9n6z\" (UID: \"2a2e707c-718f-4f17-9b77-c883f7e9d9f3\") " pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.176581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzc2s\" (UniqueName: \"kubernetes.io/projected/8a06b1c0-26fd-448a-ba31-9b6ff58ebab8-kube-api-access-kzc2s\") pod \"ovn-controller-ovs-s46ps\" (UID: \"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8\") " pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.224014 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9n6z" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.278518 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.329183 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.331615 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.335501 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fsgsk" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.335735 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.337409 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.337642 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.351091 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.353364 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.492247 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.492329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.492367 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 
18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.493034 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.493073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.493115 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.493145 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwqr\" (UniqueName: \"kubernetes.io/projected/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-kube-api-access-gtwqr\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.493185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595056 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595172 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwqr\" (UniqueName: \"kubernetes.io/projected/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-kube-api-access-gtwqr\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.595998 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.596052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.596411 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.596419 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.596630 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.610064 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.610067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.612433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.612990 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.621171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwqr\" (UniqueName: \"kubernetes.io/projected/5b6943b9-4519-4dc3-9be1-96aa9eedcfda-kube-api-access-gtwqr\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.653397 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6943b9-4519-4dc3-9be1-96aa9eedcfda\") " pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:27 crc kubenswrapper[4812]: I0218 16:48:27.671502 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.338965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.341592 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.351044 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.390742 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.391267 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-276tf" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.391827 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.392004 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497831 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497910 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.497990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.498019 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.498065 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtbc\" (UniqueName: \"kubernetes.io/projected/f4c01c8a-b0df-43ab-9097-d619e00981d2-kube-api-access-dvtbc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.600003 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtbc\" (UniqueName: \"kubernetes.io/projected/f4c01c8a-b0df-43ab-9097-d619e00981d2-kube-api-access-dvtbc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.600810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.600836 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.600920 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.600993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" 
(UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.601036 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.601060 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.601084 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.602289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.602510 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.604657 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.604752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c01c8a-b0df-43ab-9097-d619e00981d2-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.608158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.611292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.621411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4c01c8a-b0df-43ab-9097-d619e00981d2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.625949 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtbc\" (UniqueName: \"kubernetes.io/projected/f4c01c8a-b0df-43ab-9097-d619e00981d2-kube-api-access-dvtbc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.639960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4c01c8a-b0df-43ab-9097-d619e00981d2\") " pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:31 crc kubenswrapper[4812]: I0218 16:48:31.710332 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.413596 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.414170 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.414249 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.415309 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.415381 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369" gracePeriod=600 Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.567230 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369" exitCode=0 Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.567286 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369"} Feb 18 16:48:33 crc kubenswrapper[4812]: I0218 16:48:33.567326 4812 scope.go:117] "RemoveContainer" 
containerID="8db7425fe928d69d12f7dc9bac881fc646a50e16e3c8af3940ba384104ff64e3" Feb 18 16:48:39 crc kubenswrapper[4812]: E0218 16:48:39.206186 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 16:48:39 crc kubenswrapper[4812]: E0218 16:48:39.207309 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvlc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5k6hk_openstack(2f876dc8-3a88-458c-9a0c-704963d7a1c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:48:39 crc kubenswrapper[4812]: E0218 16:48:39.208510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" Feb 18 16:48:39 crc kubenswrapper[4812]: E0218 16:48:39.714786 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" Feb 18 16:48:43 crc kubenswrapper[4812]: 
E0218 16:48:43.671044 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 18 16:48:43 crc kubenswrapper[4812]: E0218 16:48:43.671436 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n9fh5f8h586h689h686h568h547h598h9ch5f6hdbh645h57fh56dh56fh568hbch577h5ddh87h675h9bhc9h547h5b8h699h5c6h544h584h5d8h564h74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkvtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(e21653f2-3333-4f74-b1c7-3d34c6ab4280): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:48:43 crc kubenswrapper[4812]: E0218 16:48:43.672617 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="e21653f2-3333-4f74-b1c7-3d34c6ab4280" Feb 18 16:48:43 crc kubenswrapper[4812]: E0218 16:48:43.807268 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="e21653f2-3333-4f74-b1c7-3d34c6ab4280" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.006554 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.007077 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slqk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-hcvqh_openstack(52313df7-7636-4aa8-a55e-8520ae930395): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.008781 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" podUID="52313df7-7636-4aa8-a55e-8520ae930395" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.081025 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.081285 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cfc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7dg6v_openstack(c127b084-aeb8-43be-8e37-9b42b615865e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.085464 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" podUID="c127b084-aeb8-43be-8e37-9b42b615865e" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.251599 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.252131 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbdgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bqjt6_openstack(f305ff39-6a36-4ec6-b856-a147992d05d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.253350 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" podUID="f305ff39-6a36-4ec6-b856-a147992d05d4" Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.381158 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9n6z"] Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.519275 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.549357 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.672791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.767456 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s46ps"] Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.768224 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6943b9-4519-4dc3-9be1-96aa9eedcfda","Type":"ContainerStarted","Data":"d545d868251377ff92887f6ccb2ef2b79dfeb54ed65696299ee40de61f051502"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.772627 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a1c50cf6-624a-4342-bc66-3a0789879e55","Type":"ContainerStarted","Data":"772a36280ebdbcdd3f3024e5599b8d09310803351ec5424be415fee53d8b7de5"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.774120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z" event={"ID":"2a2e707c-718f-4f17-9b77-c883f7e9d9f3","Type":"ContainerStarted","Data":"c031b1558ec99ad99da20bc550ae6ca9c0f75108841c16aba28f8356ed22c08f"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.779610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.781469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128","Type":"ContainerStarted","Data":"56eeaa96e1bed3c8bddcc8febaea278dd37176ad1e82917ef3fc31b6cde6f17a"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.787337 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerStarted","Data":"61009d85a65e5cd74404c1212c253c551f0f4fc5bf8eae7b393d9922a37c4549"} Feb 18 16:48:44 crc kubenswrapper[4812]: I0218 16:48:44.807891 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbfaf09-13f2-49f8-8f32-5b149c8c69be","Type":"ContainerStarted","Data":"3d2fc5aa7a5e7f6b3bf08c13f442e5d5fe0b18dc4864198529d7c51f3c95c39f"} Feb 18 16:48:44 crc kubenswrapper[4812]: E0218 16:48:44.810262 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" podUID="52313df7-7636-4aa8-a55e-8520ae930395" Feb 18 16:48:44 crc kubenswrapper[4812]: W0218 16:48:44.945502 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a06b1c0_26fd_448a_ba31_9b6ff58ebab8.slice/crio-65923627b878995488c5c3396f8d1b6a4fd8e4b9c712bcbb0729f287827a1c9a WatchSource:0}: Error finding container 65923627b878995488c5c3396f8d1b6a4fd8e4b9c712bcbb0729f287827a1c9a: Status 404 returned error can't find the container with id 65923627b878995488c5c3396f8d1b6a4fd8e4b9c712bcbb0729f287827a1c9a Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.512347 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.542595 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.553692 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:45 crc kubenswrapper[4812]: W0218 16:48:45.556347 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4c01c8a_b0df_43ab_9097_d619e00981d2.slice/crio-1758e4f185c20a49d0a5df3db2d68e5f37a89a89af175f56e071bc70c075bd6d WatchSource:0}: Error finding container 1758e4f185c20a49d0a5df3db2d68e5f37a89a89af175f56e071bc70c075bd6d: Status 404 returned error can't find the container with id 1758e4f185c20a49d0a5df3db2d68e5f37a89a89af175f56e071bc70c075bd6d Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.601999 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cfc5\" (UniqueName: \"kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5\") pod \"c127b084-aeb8-43be-8e37-9b42b615865e\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.602039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbdgp\" (UniqueName: \"kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp\") pod \"f305ff39-6a36-4ec6-b856-a147992d05d4\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.602073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config\") pod \"c127b084-aeb8-43be-8e37-9b42b615865e\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.602128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc\") pod \"c127b084-aeb8-43be-8e37-9b42b615865e\" (UID: \"c127b084-aeb8-43be-8e37-9b42b615865e\") " Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.602263 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config\") pod \"f305ff39-6a36-4ec6-b856-a147992d05d4\" (UID: \"f305ff39-6a36-4ec6-b856-a147992d05d4\") " Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.610714 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5" (OuterVolumeSpecName: "kube-api-access-8cfc5") pod "c127b084-aeb8-43be-8e37-9b42b615865e" (UID: "c127b084-aeb8-43be-8e37-9b42b615865e"). InnerVolumeSpecName "kube-api-access-8cfc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.611331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config" (OuterVolumeSpecName: "config") pod "f305ff39-6a36-4ec6-b856-a147992d05d4" (UID: "f305ff39-6a36-4ec6-b856-a147992d05d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.611883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config" (OuterVolumeSpecName: "config") pod "c127b084-aeb8-43be-8e37-9b42b615865e" (UID: "c127b084-aeb8-43be-8e37-9b42b615865e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.614333 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c127b084-aeb8-43be-8e37-9b42b615865e" (UID: "c127b084-aeb8-43be-8e37-9b42b615865e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.633972 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp" (OuterVolumeSpecName: "kube-api-access-wbdgp") pod "f305ff39-6a36-4ec6-b856-a147992d05d4" (UID: "f305ff39-6a36-4ec6-b856-a147992d05d4"). InnerVolumeSpecName "kube-api-access-wbdgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.703803 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cfc5\" (UniqueName: \"kubernetes.io/projected/c127b084-aeb8-43be-8e37-9b42b615865e-kube-api-access-8cfc5\") on node \"crc\" DevicePath \"\"" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.703848 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbdgp\" (UniqueName: \"kubernetes.io/projected/f305ff39-6a36-4ec6-b856-a147992d05d4-kube-api-access-wbdgp\") on node \"crc\" DevicePath \"\"" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.703863 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.703876 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c127b084-aeb8-43be-8e37-9b42b615865e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.703887 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f305ff39-6a36-4ec6-b856-a147992d05d4-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.816552 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" event={"ID":"c127b084-aeb8-43be-8e37-9b42b615865e","Type":"ContainerDied","Data":"f785f10df9301656f6e8fb2abfba24ee36ef86a85c7d6ed48733866cd7864928"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.816729 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7dg6v" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.827385 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s46ps" event={"ID":"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8","Type":"ContainerStarted","Data":"65923627b878995488c5c3396f8d1b6a4fd8e4b9c712bcbb0729f287827a1c9a"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.828445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4c01c8a-b0df-43ab-9097-d619e00981d2","Type":"ContainerStarted","Data":"1758e4f185c20a49d0a5df3db2d68e5f37a89a89af175f56e071bc70c075bd6d"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.829785 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" event={"ID":"f305ff39-6a36-4ec6-b856-a147992d05d4","Type":"ContainerDied","Data":"134dc089e4901b6300413b29a8ae2912167922efeafd36559ff8aa63a5967225"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.829877 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bqjt6" Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.837059 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerStarted","Data":"d2251c4c8cea65ebeaea46d31d5c2bea7c46e855105bb6a6016193f4f0a974a5"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.838799 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerStarted","Data":"6e0af39e3db5bafcb21325602fdcc8df9f21ec9c7c2302bcb8fa57b4ae50a7df"} Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.935092 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.943715 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7dg6v"] Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.980543 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:45 crc kubenswrapper[4812]: I0218 16:48:45.988806 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bqjt6"] Feb 18 16:48:46 crc kubenswrapper[4812]: I0218 16:48:46.533865 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c127b084-aeb8-43be-8e37-9b42b615865e" path="/var/lib/kubelet/pods/c127b084-aeb8-43be-8e37-9b42b615865e/volumes" Feb 18 16:48:46 crc kubenswrapper[4812]: I0218 16:48:46.535007 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f305ff39-6a36-4ec6-b856-a147992d05d4" path="/var/lib/kubelet/pods/f305ff39-6a36-4ec6-b856-a147992d05d4/volumes" Feb 18 16:48:49 crc kubenswrapper[4812]: I0218 16:48:49.875517 4812 generic.go:334] "Generic (PLEG): container finished" podID="3d1c27f6-1144-40ce-a66c-a2c1fb4aa128" containerID="56eeaa96e1bed3c8bddcc8febaea278dd37176ad1e82917ef3fc31b6cde6f17a" exitCode=0 Feb 18 16:48:49 crc kubenswrapper[4812]: I0218 16:48:49.875607 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128","Type":"ContainerDied","Data":"56eeaa96e1bed3c8bddcc8febaea278dd37176ad1e82917ef3fc31b6cde6f17a"} Feb 18 16:48:49 crc 
kubenswrapper[4812]: I0218 16:48:49.881234 4812 generic.go:334] "Generic (PLEG): container finished" podID="a1c50cf6-624a-4342-bc66-3a0789879e55" containerID="772a36280ebdbcdd3f3024e5599b8d09310803351ec5424be415fee53d8b7de5" exitCode=0 Feb 18 16:48:49 crc kubenswrapper[4812]: I0218 16:48:49.881277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a1c50cf6-624a-4342-bc66-3a0789879e55","Type":"ContainerDied","Data":"772a36280ebdbcdd3f3024e5599b8d09310803351ec5424be415fee53d8b7de5"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.000393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e21653f2-3333-4f74-b1c7-3d34c6ab4280","Type":"ContainerStarted","Data":"32d12d49c61ec9bfe080058bbac2f22f0e1a294de95888a49cfbd06ac761e790"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.004730 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.026851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d1c27f6-1144-40ce-a66c-a2c1fb4aa128","Type":"ContainerStarted","Data":"9a642468f9ff7941b480f06e79da12f75366605d239904a8cb482acf62c9d7b2"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.038060 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.443998064 podStartE2EDuration="42.038041387s" podCreationTimestamp="2026-02-18 16:48:21 +0000 UTC" firstStartedPulling="2026-02-18 16:48:22.567345386 +0000 UTC m=+1122.832956295" lastFinishedPulling="2026-02-18 16:49:02.161388709 +0000 UTC m=+1162.426999618" observedRunningTime="2026-02-18 16:49:03.033852331 +0000 UTC m=+1163.299463240" watchObservedRunningTime="2026-02-18 16:49:03.038041387 +0000 UTC m=+1163.303652296" Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.039614 4812 generic.go:334] "Generic (PLEG): container finished" podID="52313df7-7636-4aa8-a55e-8520ae930395" containerID="9470ec5635c7ef3aa2a060efd502c21b8732a65a892ca59bad903f37821faff1" exitCode=0 Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.040206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" event={"ID":"52313df7-7636-4aa8-a55e-8520ae930395","Type":"ContainerDied","Data":"9470ec5635c7ef3aa2a060efd502c21b8732a65a892ca59bad903f37821faff1"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.079395 4812 generic.go:334] "Generic (PLEG): container finished" podID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerID="e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5" exitCode=0 Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.079507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" event={"ID":"2f876dc8-3a88-458c-9a0c-704963d7a1c7","Type":"ContainerDied","Data":"e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.084152 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z" event={"ID":"2a2e707c-718f-4f17-9b77-c883f7e9d9f3","Type":"ContainerStarted","Data":"721e6a1b03c7d23e8b43fd6b244cde1fde3ae83ed0b7ec6256b44f03497cf3a4"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.084748 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n9n6z" Feb 18 16:49:03 crc 
kubenswrapper[4812]: I0218 16:49:03.098587 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s46ps" event={"ID":"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8","Type":"ContainerStarted","Data":"2e6c7a29ced531cac0f2d52d70fbdcf7c350ef3b3c618f18ba40bc4a77edf366"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.103126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbfaf09-13f2-49f8-8f32-5b149c8c69be","Type":"ContainerStarted","Data":"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.103364 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.112026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4c01c8a-b0df-43ab-9097-d619e00981d2","Type":"ContainerStarted","Data":"03f18e71b5b56e5f4bc0cad7339ca1ea81144055d9e4e68967737975c8e496da"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.117346 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.598986782 podStartE2EDuration="43.117276222s" podCreationTimestamp="2026-02-18 16:48:20 +0000 UTC" firstStartedPulling="2026-02-18 16:48:22.492155844 +0000 UTC m=+1122.757766753" lastFinishedPulling="2026-02-18 16:48:44.010445284 +0000 UTC m=+1144.276056193" observedRunningTime="2026-02-18 16:49:03.078943467 +0000 UTC m=+1163.344554376" watchObservedRunningTime="2026-02-18 16:49:03.117276222 +0000 UTC m=+1163.382887121" Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.131461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6943b9-4519-4dc3-9be1-96aa9eedcfda","Type":"ContainerStarted","Data":"e50ac151a98ebe1d8b704a76099a4f03e7b679e94081384607f41ac398fd0e05"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.148242 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a1c50cf6-624a-4342-bc66-3a0789879e55","Type":"ContainerStarted","Data":"a6b05b86ff8bb9f9d1f70b949fc2a91d3ab46c2f441c1f3f7e8f5d3c0a7c3a4d"} Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.185974 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.745570977 podStartE2EDuration="40.185952389s" podCreationTimestamp="2026-02-18 16:48:23 +0000 UTC" firstStartedPulling="2026-02-18 16:48:44.522885758 +0000 UTC m=+1144.788496667" lastFinishedPulling="2026-02-18 16:49:01.96326717 +0000 UTC m=+1162.228878079" observedRunningTime="2026-02-18 16:49:03.167185462 +0000 UTC m=+1163.432796381" watchObservedRunningTime="2026-02-18 16:49:03.185952389 +0000 UTC m=+1163.451563308" Feb 18 16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.230284 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n9n6z" podStartSLOduration=19.697119314 podStartE2EDuration="37.230257686s" podCreationTimestamp="2026-02-18 16:48:26 +0000 UTC" firstStartedPulling="2026-02-18 16:48:44.390631924 +0000 UTC m=+1144.656242833" lastFinishedPulling="2026-02-18 16:49:01.923770296 +0000 UTC m=+1162.189381205" observedRunningTime="2026-02-18 16:49:03.20486717 +0000 UTC m=+1163.470478079" watchObservedRunningTime="2026-02-18 16:49:03.230257686 +0000 UTC m=+1163.495868595" Feb 18 
16:49:03 crc kubenswrapper[4812]: I0218 16:49:03.242606 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.794247724999998 podStartE2EDuration="44.242341524s" podCreationTimestamp="2026-02-18 16:48:19 +0000 UTC" firstStartedPulling="2026-02-18 16:48:21.769628577 +0000 UTC m=+1122.035239486" lastFinishedPulling="2026-02-18 16:48:44.217722376 +0000 UTC m=+1144.483333285" observedRunningTime="2026-02-18 16:49:03.233507539 +0000 UTC m=+1163.499118468" watchObservedRunningTime="2026-02-18 16:49:03.242341524 +0000 UTC m=+1163.507952433" Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.159166 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" event={"ID":"52313df7-7636-4aa8-a55e-8520ae930395","Type":"ContainerStarted","Data":"d6b23ce88fc8fc6574ebc493bceaca2e191909f2dc0ff7071f3edf60847b5bf1"} Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.161537 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.162255 4812 generic.go:334] "Generic (PLEG): container finished" podID="8a06b1c0-26fd-448a-ba31-9b6ff58ebab8" containerID="2e6c7a29ced531cac0f2d52d70fbdcf7c350ef3b3c618f18ba40bc4a77edf366" exitCode=0 Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.162339 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s46ps" event={"ID":"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8","Type":"ContainerDied","Data":"2e6c7a29ced531cac0f2d52d70fbdcf7c350ef3b3c618f18ba40bc4a77edf366"} Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.162378 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s46ps" event={"ID":"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8","Type":"ContainerStarted","Data":"7a8e35c9e8c2308275b1c2fed0a4fbf3aaba52fafe3ebcd7e57759ae146b8a53"} Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.165498 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" event={"ID":"2f876dc8-3a88-458c-9a0c-704963d7a1c7","Type":"ContainerStarted","Data":"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c"} Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.165951 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.184865 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" podStartSLOduration=3.907286373 podStartE2EDuration="47.184842776s" podCreationTimestamp="2026-02-18 16:48:17 +0000 UTC" firstStartedPulling="2026-02-18 16:48:18.925894516 +0000 UTC m=+1119.191505425" lastFinishedPulling="2026-02-18 16:49:02.203450919 +0000 UTC m=+1162.469061828" observedRunningTime="2026-02-18 16:49:04.175436047 +0000 UTC m=+1164.441046976" watchObservedRunningTime="2026-02-18 16:49:04.184842776 +0000 UTC m=+1164.450453685" Feb 18 16:49:04 crc kubenswrapper[4812]: I0218 16:49:04.200484 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" podStartSLOduration=3.61465411 podStartE2EDuration="47.200460203s" podCreationTimestamp="2026-02-18 16:48:17 +0000 UTC" firstStartedPulling="2026-02-18 16:48:18.594140118 +0000 UTC m=+1118.859751027" lastFinishedPulling="2026-02-18 16:49:02.179946211 +0000 UTC 
m=+1162.445557120" observedRunningTime="2026-02-18 16:49:04.199771956 +0000 UTC m=+1164.465382905" watchObservedRunningTime="2026-02-18 16:49:04.200460203 +0000 UTC m=+1164.466071102" Feb 18 16:49:05 crc kubenswrapper[4812]: I0218 16:49:05.174948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerStarted","Data":"778bc9fb9cd4276fca153fd0e8737437821f6a315fdfae2166fbc1278a9581ee"} Feb 18 16:49:06 crc kubenswrapper[4812]: I0218 16:49:06.188382 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s46ps" event={"ID":"8a06b1c0-26fd-448a-ba31-9b6ff58ebab8","Type":"ContainerStarted","Data":"b703a58bec90dea3feb644ecbf191542dd57bd25f223b908657fc76086af33e3"} Feb 18 16:49:06 crc kubenswrapper[4812]: I0218 16:49:06.188592 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:49:06 crc kubenswrapper[4812]: I0218 16:49:06.188612 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:49:06 crc kubenswrapper[4812]: I0218 16:49:06.211299 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-s46ps" podStartSLOduration=34.162255423 podStartE2EDuration="40.211274887s" podCreationTimestamp="2026-02-18 16:48:26 +0000 UTC" firstStartedPulling="2026-02-18 16:48:44.947372915 +0000 UTC m=+1145.212983824" lastFinishedPulling="2026-02-18 16:48:50.996392379 +0000 UTC m=+1151.262003288" observedRunningTime="2026-02-18 16:49:06.208377823 +0000 UTC m=+1166.473988742" watchObservedRunningTime="2026-02-18 16:49:06.211274887 +0000 UTC m=+1166.476885796" Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.672927 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.674603 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6943b9-4519-4dc3-9be1-96aa9eedcfda","Type":"ContainerStarted","Data":"82ce0b8be93089c2a9dcf6c2e144554f5865fa528d343da6f8315e44b23d6923"} Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.675936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4c01c8a-b0df-43ab-9097-d619e00981d2","Type":"ContainerStarted","Data":"5e470059ce016da91e4c9860f40917e79c432e0ed3bef53cc0120c5df1203218"} Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.724935 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.777656477 podStartE2EDuration="42.72491413s" podCreationTimestamp="2026-02-18 16:48:26 +0000 UTC" firstStartedPulling="2026-02-18 16:48:44.700357982 +0000 UTC m=+1144.965968901" lastFinishedPulling="2026-02-18 16:49:06.647615645 +0000 UTC m=+1166.913226554" observedRunningTime="2026-02-18 16:49:08.71820025 +0000 UTC m=+1168.983811149" watchObservedRunningTime="2026-02-18 16:49:08.72491413 +0000 UTC m=+1168.990525039" Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.753142 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.764941 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.558133339 
podStartE2EDuration="38.764914488s" podCreationTimestamp="2026-02-18 16:48:30 +0000 UTC" firstStartedPulling="2026-02-18 16:48:45.564628765 +0000 UTC m=+1145.830239674" lastFinishedPulling="2026-02-18 16:49:06.771409914 +0000 UTC m=+1167.037020823" observedRunningTime="2026-02-18 16:49:08.754843652 +0000 UTC m=+1169.020454561" watchObservedRunningTime="2026-02-18 16:49:08.764914488 +0000 UTC m=+1169.030525397" Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.767296 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="dnsmasq-dns" containerID="cri-o://9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c" gracePeriod=10 Feb 18 16:49:08 crc kubenswrapper[4812]: I0218 16:49:08.772583 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.224866 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.378080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc\") pod \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.378251 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config\") pod \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.378295 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvlc4\" (UniqueName: \"kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4\") pod \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\" (UID: \"2f876dc8-3a88-458c-9a0c-704963d7a1c7\") " Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.383220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4" (OuterVolumeSpecName: "kube-api-access-rvlc4") pod "2f876dc8-3a88-458c-9a0c-704963d7a1c7" (UID: "2f876dc8-3a88-458c-9a0c-704963d7a1c7"). InnerVolumeSpecName "kube-api-access-rvlc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.416568 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f876dc8-3a88-458c-9a0c-704963d7a1c7" (UID: "2f876dc8-3a88-458c-9a0c-704963d7a1c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.420438 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config" (OuterVolumeSpecName: "config") pod "2f876dc8-3a88-458c-9a0c-704963d7a1c7" (UID: "2f876dc8-3a88-458c-9a0c-704963d7a1c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.482646 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.482693 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvlc4\" (UniqueName: \"kubernetes.io/projected/2f876dc8-3a88-458c-9a0c-704963d7a1c7-kube-api-access-rvlc4\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.482709 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f876dc8-3a88-458c-9a0c-704963d7a1c7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.672677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.717610 4812 generic.go:334] "Generic (PLEG): container finished" podID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerID="9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c" exitCode=0 Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.717700 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.717745 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" event={"ID":"2f876dc8-3a88-458c-9a0c-704963d7a1c7","Type":"ContainerDied","Data":"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c"} Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.717782 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5k6hk" event={"ID":"2f876dc8-3a88-458c-9a0c-704963d7a1c7","Type":"ContainerDied","Data":"870cc3cac83c168c5c25d014b49498beb04966311bb528720bc1ab3a5de0a218"} Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.717798 4812 scope.go:117] "RemoveContainer" containerID="9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.763898 4812 scope.go:117] "RemoveContainer" containerID="e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.766921 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.783622 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5k6hk"] Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.801275 4812 scope.go:117] "RemoveContainer" containerID="9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c" Feb 18 16:49:09 crc kubenswrapper[4812]: E0218 16:49:09.804361 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c\": container with ID starting with 9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c not found: ID does not exist" containerID="9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.804405 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c"} err="failed to get container status \"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c\": rpc error: code = NotFound desc = could not find container \"9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c\": container with ID starting with 9f489e19d00d2c46dc13b6e4de811897e13b05849b64ce5f6573bafdf53ec42c not found: ID does not exist" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.804433 4812 scope.go:117] "RemoveContainer" containerID="e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5" Feb 18 16:49:09 crc kubenswrapper[4812]: E0218 16:49:09.807339 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5\": container with ID starting with e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5 not found: ID does not exist" containerID="e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.807387 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5"} err="failed to get container status \"e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5\": rpc error: code = NotFound desc = could not find container \"e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5\": container with ID starting with e00065c2cf608a16d1c4aeb9435eec854baee68b194f037c2e2d0430da9894a5 not found: ID does not exist" Feb 18 16:49:09 crc kubenswrapper[4812]: I0218 16:49:09.889853 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.517843 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" path="/var/lib/kubelet/pods/2f876dc8-3a88-458c-9a0c-704963d7a1c7/volumes" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.711199 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.726419 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.757207 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.764009 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.800373 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 16:49:10 crc kubenswrapper[4812]: I0218 16:49:10.800441 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.074731 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:11 crc kubenswrapper[4812]: E0218 16:49:11.075261 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="dnsmasq-dns" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 
16:49:11.075283 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="dnsmasq-dns" Feb 18 16:49:11 crc kubenswrapper[4812]: E0218 16:49:11.075301 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="init" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.075312 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="init" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.075531 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f876dc8-3a88-458c-9a0c-704963d7a1c7" containerName="dnsmasq-dns" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.078067 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.081242 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.089545 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.181583 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-n59pt"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.182802 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.184742 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.197823 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n59pt"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.219354 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.219417 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.219538 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.219586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44jr\" (UniqueName: \"kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc 
kubenswrapper[4812]: I0218 16:49:11.322104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-config\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pskvl\" (UniqueName: \"kubernetes.io/projected/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-kube-api-access-pskvl\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322270 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44jr\" (UniqueName: \"kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322449 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovs-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322578 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322605 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovn-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " 
pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.322833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-combined-ca-bundle\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.323904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.323921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.324596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.351702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44jr\" (UniqueName: \"kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr\") pod \"dnsmasq-dns-7fd796d7df-rmpfx\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.394293 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.416571 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovn-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-combined-ca-bundle\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-config\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pskvl\" (UniqueName: \"kubernetes.io/projected/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-kube-api-access-pskvl\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425309 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovs-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovs-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.425703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-ovn-rundir\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.427079 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-config\") pod 
\"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.430711 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.440892 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.446559 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.449315 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-combined-ca-bundle\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.451144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pskvl\" (UniqueName: \"kubernetes.io/projected/9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4-kube-api-access-pskvl\") pod \"ovn-controller-metrics-n59pt\" (UID: \"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4\") " pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.458549 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.465456 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.500196 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-n59pt" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.531578 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.531809 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.531853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt77v\" (UniqueName: \"kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.531896 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.531932 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.633191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.633257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt77v\" (UniqueName: \"kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.633375 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.633417 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" 
(UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.633494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.634273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.635162 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.638208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.643038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.666189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt77v\" (UniqueName: \"kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v\") pod \"dnsmasq-dns-86db49b7ff-d2wgs\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.712053 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.740350 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerID="778bc9fb9cd4276fca153fd0e8737437821f6a315fdfae2166fbc1278a9581ee" exitCode=0 Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.740458 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerDied","Data":"778bc9fb9cd4276fca153fd0e8737437821f6a315fdfae2166fbc1278a9581ee"} Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.777731 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.824401 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.824531 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.910189 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:11 crc kubenswrapper[4812]: I0218 16:49:11.939962 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.009505 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.151441 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n59pt"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.434233 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.605457 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.671985 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wvhjk"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.673106 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.709983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.710398 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68786\" (UniqueName: \"kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.714172 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wvhjk"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.776373 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-52dc-account-create-update-s4cjj"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.778201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.779508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n59pt" event={"ID":"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4","Type":"ContainerStarted","Data":"3339a585d691c7b8cc1361d56118521e7b665aff687cf6b544baa1e09a9acdec"} Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.782245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" event={"ID":"694c63f2-a2ca-4c2a-a89a-e43d52611749","Type":"ContainerStarted","Data":"7add6ad80bd9119ce7b47fdd557e4500676c0dd34625405ab81bafa1a22f1b13"} Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.783963 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.866559 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68786\" (UniqueName: \"kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.867018 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.870271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.905545 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-52dc-account-create-update-s4cjj"] Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.941157 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68786\" (UniqueName: \"kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786\") pod \"glance-db-create-wvhjk\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.969030 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22br7\" (UniqueName: \"kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:12 crc kubenswrapper[4812]: I0218 16:49:12.971776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.056056 4812 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.062310 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.064495 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.069243 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.069427 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.069529 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.069636 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zn4zl" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.074941 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-config\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075194 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075216 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-scripts\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22br7\" (UniqueName: \"kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.075345 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 
16:49:13.077551 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59gd7\" (UniqueName: \"kubernetes.io/projected/350af9df-062b-44ba-bac2-66417c4dfcef-kube-api-access-59gd7\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.077601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.077630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.078429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.123198 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22br7\" (UniqueName: \"kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7\") pod \"glance-52dc-account-create-update-s4cjj\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.171140 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.197211 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-config\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.198256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.198325 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-scripts\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.198417 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc 
kubenswrapper[4812]: I0218 16:49:13.198740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.198837 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59gd7\" (UniqueName: \"kubernetes.io/projected/350af9df-062b-44ba-bac2-66417c4dfcef-kube-api-access-59gd7\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.198959 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.206185 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-config\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.207213 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.207825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/350af9df-062b-44ba-bac2-66417c4dfcef-scripts\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.213134 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.233766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.236323 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350af9df-062b-44ba-bac2-66417c4dfcef-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.236847 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.250533 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59gd7\" (UniqueName: \"kubernetes.io/projected/350af9df-062b-44ba-bac2-66417c4dfcef-kube-api-access-59gd7\") pod \"ovn-northd-0\" (UID: \"350af9df-062b-44ba-bac2-66417c4dfcef\") " pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.272149 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8fm48"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.291224 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.316526 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fm48"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.338533 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-70ea-account-create-update-z8dpj"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.339698 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-70ea-account-create-update-z8dpj"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.339778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.349474 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.378894 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nb6kk"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.380131 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.411433 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts\") pod \"placement-db-create-nb6kk\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.411606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbckx\" (UniqueName: \"kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.416081 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4v5\" (UniqueName: \"kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.416238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.416396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.416425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5g4\" (UniqueName: \"kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4\") pod \"placement-db-create-nb6kk\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.419204 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nb6kk"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.446644 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5g4\" (UniqueName: \"kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4\") pod \"placement-db-create-nb6kk\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts\") pod \"placement-db-create-nb6kk\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbckx\" (UniqueName: \"kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518430 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4v5\" (UniqueName: \"kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.518471 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.519892 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.520852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.521039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts\") pod \"placement-db-create-nb6kk\" (UID: 
\"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.559383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5g4\" (UniqueName: \"kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4\") pod \"placement-db-create-nb6kk\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.567309 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4v5\" (UniqueName: \"kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5\") pod \"keystone-70ea-account-create-update-z8dpj\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.568336 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbckx\" (UniqueName: \"kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx\") pod \"keystone-db-create-8fm48\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.599263 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a8b1-account-create-update-785ct"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.610268 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.619342 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.621209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4fc\" (UniqueName: \"kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.621320 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.652867 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.656773 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a8b1-account-create-update-785ct"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.690826 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.710891 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.712295 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wvhjk"] Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.723002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4fc\" (UniqueName: \"kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.724608 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.725896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: W0218 16:49:13.745697 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42be7b4_a0bc_40c3_b297_1259ce32e320.slice/crio-6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09 WatchSource:0}: Error finding container 6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09: Status 404 returned error can't find the container with id 6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09 Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.757893 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4fc\" (UniqueName: \"kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc\") pod \"placement-a8b1-account-create-update-785ct\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.815119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerStarted","Data":"48a6cd0ea3c764c851cb28b5663a9ef4a81f4b01245b4836557365d673780af6"} Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.819242 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wvhjk" event={"ID":"d42be7b4-a0bc-40c3-b297-1259ce32e320","Type":"ContainerStarted","Data":"6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09"} Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.822542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n59pt" event={"ID":"9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4","Type":"ContainerStarted","Data":"c5349e7aa09a9f55643f2658a72a904fb7abbf548e438f4109efb963ef2f8878"} Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.830437 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" podUID="694c63f2-a2ca-4c2a-a89a-e43d52611749" containerName="init" containerID="cri-o://1f1658ecc197fe8938a7748a680dfa33a2a87e17026939bc918118b909faf45e" gracePeriod=10 Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.830877 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" event={"ID":"694c63f2-a2ca-4c2a-a89a-e43d52611749","Type":"ContainerStarted","Data":"1f1658ecc197fe8938a7748a680dfa33a2a87e17026939bc918118b909faf45e"} Feb 18 16:49:13 crc kubenswrapper[4812]: I0218 16:49:13.848416 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-n59pt" podStartSLOduration=2.8483902949999997 podStartE2EDuration="2.848390295s" podCreationTimestamp="2026-02-18 16:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:13.841533331 +0000 UTC m=+1174.107144240" watchObservedRunningTime="2026-02-18 16:49:13.848390295 +0000 UTC m=+1174.114001204" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.041461 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.368495 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-52dc-account-create-update-s4cjj"] Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.416384 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-cwvf6"] Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.417651 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.474497 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-addf-account-create-update-ltxgg"] Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.475777 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.487856 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.495956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnl9\" (UniqueName: \"kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.496022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.496132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgrw8\" (UniqueName: \"kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.496183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.496512 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.598297 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgrw8\" (UniqueName: \"kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.598380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.598470 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnl9\" (UniqueName: \"kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.598515 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.604401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.605470 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.633706 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgrw8\" (UniqueName: \"kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8\") pod \"watcher-db-create-cwvf6\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.631575 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnl9\" (UniqueName: \"kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9\") pod \"watcher-addf-account-create-update-ltxgg\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.830942 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:14 crc kubenswrapper[4812]: I0218 16:49:14.852638 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047840 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-addf-account-create-update-ltxgg"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047898 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-cwvf6"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047913 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047926 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fm48"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047946 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-70ea-account-create-update-z8dpj"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047965 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.047987 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.079526 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.080918 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.091550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-52dc-account-create-update-s4cjj" event={"ID":"95504d5b-50f1-436c-a2fe-21835f70912e","Type":"ContainerStarted","Data":"f77d4c50b7642dcd5d3d3417f30b5eeac9ac2dce90ba56ec9212d8a7b3709480"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.145068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"350af9df-062b-44ba-bac2-66417c4dfcef","Type":"ContainerStarted","Data":"2124b2a18bf4d8ed4b849ea7f1ca5e3dfef769f636d920d19af03764ce61e721"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.146840 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nb6kk"] Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.221352 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.221810 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl528\" (UniqueName: \"kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.221861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " 
pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.221889 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.221989 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.254427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wvhjk" event={"ID":"d42be7b4-a0bc-40c3-b297-1259ce32e320","Type":"ContainerStarted","Data":"f392d4ed8aacf89d1d3efe6b34c16a0fc868cfb2b0f2281388d13c544c4cfcbd"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.301819 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-wvhjk" podStartSLOduration=3.301776052 podStartE2EDuration="3.301776052s" podCreationTimestamp="2026-02-18 16:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:15.286402071 +0000 UTC m=+1175.552012980" watchObservedRunningTime="2026-02-18 16:49:15.301776052 +0000 UTC m=+1175.567386961" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.311747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fm48" event={"ID":"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb","Type":"ContainerStarted","Data":"4ec29fac52d0768f88436148d53d33c0538c936659b248a65fd6cd97973d8d6b"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.324471 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.324553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.324588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl528\" (UniqueName: \"kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.324615 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " 
pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.324634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.325406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.325906 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.325956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.326499 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.328516 4812 generic.go:334] "Generic (PLEG): container finished" podID="694c63f2-a2ca-4c2a-a89a-e43d52611749" containerID="1f1658ecc197fe8938a7748a680dfa33a2a87e17026939bc918118b909faf45e" exitCode=0 Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.328579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" event={"ID":"694c63f2-a2ca-4c2a-a89a-e43d52611749","Type":"ContainerDied","Data":"1f1658ecc197fe8938a7748a680dfa33a2a87e17026939bc918118b909faf45e"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.402413 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl528\" (UniqueName: \"kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528\") pod \"dnsmasq-dns-698758b865-55brp\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.412456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerStarted","Data":"d13b92d1a53bf37c18bbbe1fd384ced97ff26bc9c8f1a393167c208aa8aeaa90"} Feb 18 16:49:15 crc kubenswrapper[4812]: I0218 16:49:15.550006 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.005708 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a8b1-account-create-update-785ct"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.009315 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.062000 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb\") pod \"694c63f2-a2ca-4c2a-a89a-e43d52611749\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.062128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44jr\" (UniqueName: \"kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr\") pod \"694c63f2-a2ca-4c2a-a89a-e43d52611749\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.062159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc\") pod \"694c63f2-a2ca-4c2a-a89a-e43d52611749\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.062186 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config\") pod \"694c63f2-a2ca-4c2a-a89a-e43d52611749\" (UID: \"694c63f2-a2ca-4c2a-a89a-e43d52611749\") " Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.070684 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr" (OuterVolumeSpecName: "kube-api-access-v44jr") pod "694c63f2-a2ca-4c2a-a89a-e43d52611749" (UID: "694c63f2-a2ca-4c2a-a89a-e43d52611749"). InnerVolumeSpecName "kube-api-access-v44jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.088426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "694c63f2-a2ca-4c2a-a89a-e43d52611749" (UID: "694c63f2-a2ca-4c2a-a89a-e43d52611749"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.093984 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config" (OuterVolumeSpecName: "config") pod "694c63f2-a2ca-4c2a-a89a-e43d52611749" (UID: "694c63f2-a2ca-4c2a-a89a-e43d52611749"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.094206 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "694c63f2-a2ca-4c2a-a89a-e43d52611749" (UID: "694c63f2-a2ca-4c2a-a89a-e43d52611749"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.164475 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.164521 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44jr\" (UniqueName: \"kubernetes.io/projected/694c63f2-a2ca-4c2a-a89a-e43d52611749-kube-api-access-v44jr\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.164537 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.164550 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694c63f2-a2ca-4c2a-a89a-e43d52611749-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.271794 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.272221 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694c63f2-a2ca-4c2a-a89a-e43d52611749" containerName="init" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.272243 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="694c63f2-a2ca-4c2a-a89a-e43d52611749" containerName="init" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.272786 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="694c63f2-a2ca-4c2a-a89a-e43d52611749" containerName="init" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.278939 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.287317 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.287393 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.287658 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.288876 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kwt9v" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.295983 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.339832 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-addf-account-create-update-ltxgg"] Feb 18 16:49:16 crc kubenswrapper[4812]: W0218 16:49:16.340501 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdda65b0_3132_4e95_a32e_d5772f7f1354.slice/crio-1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff WatchSource:0}: Error finding container 1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff: Status 404 returned error can't find the container with id 1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.359464 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-cwvf6"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367559 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795346dc-bc66-461a-bb9e-64991ac27a50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-cache\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-lock\") pod \"swift-storage-0\" (UID: 
\"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.367858 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgztp\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-kube-api-access-qgztp\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.443295 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" event={"ID":"694c63f2-a2ca-4c2a-a89a-e43d52611749","Type":"ContainerDied","Data":"7add6ad80bd9119ce7b47fdd557e4500676c0dd34625405ab81bafa1a22f1b13"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.443355 4812 scope.go:117] "RemoveContainer" containerID="1f1658ecc197fe8938a7748a680dfa33a2a87e17026939bc918118b909faf45e" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.443496 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rmpfx" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.452663 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.461909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-70ea-account-create-update-z8dpj" event={"ID":"bced8af6-aca7-4de1-96a8-40c4c31a8168","Type":"ContainerStarted","Data":"fe231f3c4dd2daadda73bd2a0a118066864b54226df0ca2a8ad83c6af091908f"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.469990 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.470200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.470230 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-cache\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.470247 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-lock\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.470269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgztp\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-kube-api-access-qgztp\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.470296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795346dc-bc66-461a-bb9e-64991ac27a50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.477641 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795346dc-bc66-461a-bb9e-64991ac27a50-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.479176 4812 generic.go:334] "Generic (PLEG): container finished" podID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerID="d13b92d1a53bf37c18bbbe1fd384ced97ff26bc9c8f1a393167c208aa8aeaa90" exitCode=0 Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.479250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerDied","Data":"d13b92d1a53bf37c18bbbe1fd384ced97ff26bc9c8f1a393167c208aa8aeaa90"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.480049 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.480137 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.480150 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.480188 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:16.980172253 +0000 UTC m=+1177.245783162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.480710 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-cache\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.480807 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a8b1-account-create-update-785ct" event={"ID":"93bd63e5-c276-43f0-8650-bf74a32c7e7f","Type":"ContainerStarted","Data":"44b8345871c7249f6c296c540c54901e88e4f953f3e13d9aabf5c1c9ce3d7490"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.481045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/795346dc-bc66-461a-bb9e-64991ac27a50-lock\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.481634 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-addf-account-create-update-ltxgg" event={"ID":"bdda65b0-3132-4e95-a32e-d5772f7f1354","Type":"ContainerStarted","Data":"1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.482365 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-cwvf6" event={"ID":"95955afd-adc9-44b0-93ba-4e4a63292613","Type":"ContainerStarted","Data":"b1d70aecdd3bbb502e232146bbd0cbabae4320cdcc464f9872681bf6d833c061"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.483739 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nb6kk" event={"ID":"d1406358-077f-4147-9645-a0492308800c","Type":"ContainerStarted","Data":"bbaa9e16e0f7f3965482d9076e3ea17e53efbe6b108aa698316e7f90d715d46d"} Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.505343 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.519498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgztp\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-kube-api-access-qgztp\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.798179 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.807960 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rmpfx"] Feb 18 16:49:16 crc kubenswrapper[4812]: I0218 16:49:16.983413 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " 
pod="openstack/swift-storage-0" Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.983695 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.983723 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:16 crc kubenswrapper[4812]: E0218 16:49:16.983802 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:17.983778342 +0000 UTC m=+1178.249389251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.490983 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerStarted","Data":"7e79de5741095c068fb13fa3a43c6da43b43c080bc6a78b500e76b8d4ed96090"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.491340 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerStarted","Data":"af660a7a85deb68fcb6ef7c9e1799aa27025867732343117480bd200a4ccc4a6"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.492973 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-52dc-account-create-update-s4cjj" event={"ID":"95504d5b-50f1-436c-a2fe-21835f70912e","Type":"ContainerStarted","Data":"3107070e262ee24fba9777fd7c59e2932babf6fb5ff435a936fc7fb668d8889b"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.494257 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-cwvf6" event={"ID":"95955afd-adc9-44b0-93ba-4e4a63292613","Type":"ContainerStarted","Data":"6305bb6714308dd574764743da30b170bc4f215cb7484c891ea63f70d587c7c2"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.495609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a8b1-account-create-update-785ct" event={"ID":"93bd63e5-c276-43f0-8650-bf74a32c7e7f","Type":"ContainerStarted","Data":"b821240bcd8ce78f3f661ba0304c3874a67ffe6291a5b8d2a13de148be49a7a6"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.497311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-addf-account-create-update-ltxgg" event={"ID":"bdda65b0-3132-4e95-a32e-d5772f7f1354","Type":"ContainerStarted","Data":"41b151232d544a4490fab42c933f3b203a98fa585d05bf18d3c177a3cc201723"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.498217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fm48" event={"ID":"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb","Type":"ContainerStarted","Data":"7e1a476320d946d03983a48bdfcb6820db6d788dd7ca4b12d685550a08a3339e"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.499372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nb6kk" 
event={"ID":"d1406358-077f-4147-9645-a0492308800c","Type":"ContainerStarted","Data":"76f5f5cf7144824215bb8ea652fef920830a808e9561f8ea3d472c5a7e3e2b06"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.505135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-70ea-account-create-update-z8dpj" event={"ID":"bced8af6-aca7-4de1-96a8-40c4c31a8168","Type":"ContainerStarted","Data":"37da6ef8c87cca73941fecd66b0ed1baaf0ad94244ea8a393850f3ae4e94ecde"} Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.548656 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nb6kk" podStartSLOduration=4.548626679 podStartE2EDuration="4.548626679s" podCreationTimestamp="2026-02-18 16:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:17.542266307 +0000 UTC m=+1177.807877226" watchObservedRunningTime="2026-02-18 16:49:17.548626679 +0000 UTC m=+1177.814237588" Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.562409 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-70ea-account-create-update-z8dpj" podStartSLOduration=4.562381479 podStartE2EDuration="4.562381479s" podCreationTimestamp="2026-02-18 16:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:17.56046044 +0000 UTC m=+1177.826071349" watchObservedRunningTime="2026-02-18 16:49:17.562381479 +0000 UTC m=+1177.827992388" Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.581536 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a8b1-account-create-update-785ct" podStartSLOduration=4.581517806 podStartE2EDuration="4.581517806s" podCreationTimestamp="2026-02-18 16:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:17.578574981 +0000 UTC m=+1177.844185910" watchObservedRunningTime="2026-02-18 16:49:17.581517806 +0000 UTC m=+1177.847128715" Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.600262 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-52dc-account-create-update-s4cjj" podStartSLOduration=5.600241362 podStartE2EDuration="5.600241362s" podCreationTimestamp="2026-02-18 16:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:17.597297417 +0000 UTC m=+1177.862908326" watchObservedRunningTime="2026-02-18 16:49:17.600241362 +0000 UTC m=+1177.865852271" Feb 18 16:49:17 crc kubenswrapper[4812]: I0218 16:49:17.620475 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-cwvf6" podStartSLOduration=3.620455836 podStartE2EDuration="3.620455836s" podCreationTimestamp="2026-02-18 16:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:17.61195564 +0000 UTC m=+1177.877566569" watchObservedRunningTime="2026-02-18 16:49:17.620455836 +0000 UTC m=+1177.886066735" Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.000935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:18 crc kubenswrapper[4812]: E0218 16:49:18.001580 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:18 crc kubenswrapper[4812]: E0218 16:49:18.001734 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:18 crc kubenswrapper[4812]: E0218 16:49:18.001856 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:20.001834187 +0000 UTC m=+1180.267445106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.516534 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="dnsmasq-dns" containerID="cri-o://6cc2927f2b1daf408336e683b604062bbade721294efc3f0ca860ed74f41d0b8" gracePeriod=10 Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.516831 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694c63f2-a2ca-4c2a-a89a-e43d52611749" path="/var/lib/kubelet/pods/694c63f2-a2ca-4c2a-a89a-e43d52611749/volumes" Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.517623 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.517645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerStarted","Data":"6cc2927f2b1daf408336e683b604062bbade721294efc3f0ca860ed74f41d0b8"} Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.519691 4812 generic.go:334] "Generic (PLEG): container finished" podID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerID="d2251c4c8cea65ebeaea46d31d5c2bea7c46e855105bb6a6016193f4f0a974a5" exitCode=0 Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.519827 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerDied","Data":"d2251c4c8cea65ebeaea46d31d5c2bea7c46e855105bb6a6016193f4f0a974a5"} Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.522787 4812 generic.go:334] "Generic (PLEG): container finished" podID="c83289ef-3638-4586-9801-be6e91d900d2" containerID="7e79de5741095c068fb13fa3a43c6da43b43c080bc6a78b500e76b8d4ed96090" exitCode=0 Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.522921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerDied","Data":"7e79de5741095c068fb13fa3a43c6da43b43c080bc6a78b500e76b8d4ed96090"} Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.526280 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerID="6e0af39e3db5bafcb21325602fdcc8df9f21ec9c7c2302bcb8fa57b4ae50a7df" exitCode=0 Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.527596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerDied","Data":"6e0af39e3db5bafcb21325602fdcc8df9f21ec9c7c2302bcb8fa57b4ae50a7df"} Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.576444 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" podStartSLOduration=7.576430171 podStartE2EDuration="7.576430171s" podCreationTimestamp="2026-02-18 16:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:18.550676286 +0000 UTC m=+1178.816287195" watchObservedRunningTime="2026-02-18 16:49:18.576430171 +0000 UTC m=+1178.842041080" Feb 18 16:49:18 crc kubenswrapper[4812]: I0218 16:49:18.609754 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-addf-account-create-update-ltxgg" podStartSLOduration=4.609731148 podStartE2EDuration="4.609731148s" podCreationTimestamp="2026-02-18 16:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:18.600036502 +0000 UTC m=+1178.865647411" watchObservedRunningTime="2026-02-18 16:49:18.609731148 +0000 UTC m=+1178.875342057" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.271380 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-8fm48" podStartSLOduration=6.271353244 podStartE2EDuration="6.271353244s" podCreationTimestamp="2026-02-18 16:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:19.267020774 +0000 UTC m=+1179.532631703" watchObservedRunningTime="2026-02-18 16:49:19.271353244 +0000 UTC m=+1179.536964173" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.418538 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kjfct"] Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.420401 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.423043 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.430892 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kjfct"] Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.591575 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.591640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhrq\" (UniqueName: \"kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.684953 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pfnnt"] Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.690342 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.694592 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.694598 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.695644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.695699 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhrq\" (UniqueName: \"kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.697229 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.697356 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.707452 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfnnt"] Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.739009 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhrq\" (UniqueName: \"kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq\") pod \"root-account-create-update-kjfct\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.748696 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.797241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.797576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.797656 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.798031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.798068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zw7\" (UniqueName: \"kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.798123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.798184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901683 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901738 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84zw7\" (UniqueName: \"kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.901851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.902496 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.902736 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.903541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts\") pod \"swift-ring-rebalance-pfnnt\" (UID: 
\"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.911733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.913573 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.913872 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:19 crc kubenswrapper[4812]: I0218 16:49:19.935705 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84zw7\" (UniqueName: \"kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7\") pod \"swift-ring-rebalance-pfnnt\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.005078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:20 crc kubenswrapper[4812]: E0218 16:49:20.005323 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:20 crc kubenswrapper[4812]: E0218 16:49:20.005905 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:20 crc kubenswrapper[4812]: E0218 16:49:20.005968 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:24.00594603 +0000 UTC m=+1184.271556939 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.467814 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.531671 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kjfct"] Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.579319 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerStarted","Data":"ef790d1c7f46d4728ca5c66f52883b193cf403a2a870f0384109ddec0867e4af"} Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.584851 4812 generic.go:334] "Generic (PLEG): container finished" podID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerID="6cc2927f2b1daf408336e683b604062bbade721294efc3f0ca860ed74f41d0b8" exitCode=0 Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.584934 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerDied","Data":"6cc2927f2b1daf408336e683b604062bbade721294efc3f0ca860ed74f41d0b8"} Feb 18 16:49:20 crc kubenswrapper[4812]: I0218 16:49:20.589042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerStarted","Data":"5d4766015e413344722df453be9e416c24e2f4a4e1aac3c86059f18003f71920"} Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.176392 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pfnnt"] Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.602068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kjfct" event={"ID":"995feecd-1bbb-4fdb-b368-36f87084d6e5","Type":"ContainerStarted","Data":"be4c69fb1c4e682a23f660d7e18f3ddd22767b7ae6366bef1d6c4291717991e9"} Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.605804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerStarted","Data":"d14b0fdb25b8c50a15d75eb0e8d43639a0ff0d78b52bdb7079d14cab7a8a33f5"} Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.607355 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfnnt" event={"ID":"430cd891-febe-45a3-9d5d-97b3933ab503","Type":"ContainerStarted","Data":"f5915bf0f48748dc68a1eefd7103abcb91272a90376d863446f2a94a5eec0bd1"} Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.607526 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:49:21 crc kubenswrapper[4812]: I0218 16:49:21.638667 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.756125614 podStartE2EDuration="1m3.638648008s" podCreationTimestamp="2026-02-18 16:48:18 +0000 UTC" firstStartedPulling="2026-02-18 16:48:20.282253136 +0000 UTC m=+1120.547864045" lastFinishedPulling="2026-02-18 16:48:44.16477553 +0000 UTC m=+1144.430386439" observedRunningTime="2026-02-18 16:49:21.635544729 +0000 UTC m=+1181.901155658" watchObservedRunningTime="2026-02-18 16:49:21.638648008 +0000 UTC m=+1181.904258927" Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.539563 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 
16:49:22.617778 4812 generic.go:334] "Generic (PLEG): container finished" podID="d42be7b4-a0bc-40c3-b297-1259ce32e320" containerID="f392d4ed8aacf89d1d3efe6b34c16a0fc868cfb2b0f2281388d13c544c4cfcbd" exitCode=0 Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.617897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wvhjk" event={"ID":"d42be7b4-a0bc-40c3-b297-1259ce32e320","Type":"ContainerDied","Data":"f392d4ed8aacf89d1d3efe6b34c16a0fc868cfb2b0f2281388d13c544c4cfcbd"} Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.622078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kjfct" event={"ID":"995feecd-1bbb-4fdb-b368-36f87084d6e5","Type":"ContainerStarted","Data":"29714f3b627c994c2c22ba9d58fea0f2b3c7af25998354c25d01b5996ebf1046"} Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.622496 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.664706 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-55brp" podStartSLOduration=8.664606403 podStartE2EDuration="8.664606403s" podCreationTimestamp="2026-02-18 16:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:22.662613172 +0000 UTC m=+1182.928224081" watchObservedRunningTime="2026-02-18 16:49:22.664606403 +0000 UTC m=+1182.930217312" Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.685123 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="3d1c27f6-1144-40ce-a66c-a2c1fb4aa128" containerName="galera" probeResult="failure" output=< Feb 18 16:49:22 crc kubenswrapper[4812]: wsrep_local_state_comment (Joined) differs from Synced Feb 18 16:49:22 crc kubenswrapper[4812]: > Feb 18 16:49:22 crc kubenswrapper[4812]: I0218 16:49:22.707740 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.654306669 podStartE2EDuration="1m5.707712179s" podCreationTimestamp="2026-02-18 16:48:17 +0000 UTC" firstStartedPulling="2026-02-18 16:48:19.754720808 +0000 UTC m=+1120.020331717" lastFinishedPulling="2026-02-18 16:48:43.808126308 +0000 UTC m=+1144.073737227" observedRunningTime="2026-02-18 16:49:22.694725179 +0000 UTC m=+1182.960336088" watchObservedRunningTime="2026-02-18 16:49:22.707712179 +0000 UTC m=+1182.973323088" Feb 18 16:49:23 crc kubenswrapper[4812]: I0218 16:49:23.643339 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kjfct" podStartSLOduration=4.643320845 podStartE2EDuration="4.643320845s" podCreationTimestamp="2026-02-18 16:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:23.641421917 +0000 UTC m=+1183.907032836" watchObservedRunningTime="2026-02-18 16:49:23.643320845 +0000 UTC m=+1183.908931764" Feb 18 16:49:24 crc kubenswrapper[4812]: I0218 16:49:24.058678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:24 crc 
kubenswrapper[4812]: E0218 16:49:24.058862 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:24 crc kubenswrapper[4812]: E0218 16:49:24.058875 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:24 crc kubenswrapper[4812]: E0218 16:49:24.058925 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:32.058909476 +0000 UTC m=+1192.324520385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.071294 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.076024 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204256 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config\") pod \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204338 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt77v\" (UniqueName: \"kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v\") pod \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204475 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc\") pod \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204515 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb\") pod \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb\") pod \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\" (UID: \"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204702 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts\") pod \"d42be7b4-a0bc-40c3-b297-1259ce32e320\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.204754 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68786\" (UniqueName: \"kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786\") pod \"d42be7b4-a0bc-40c3-b297-1259ce32e320\" (UID: \"d42be7b4-a0bc-40c3-b297-1259ce32e320\") " Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.205978 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d42be7b4-a0bc-40c3-b297-1259ce32e320" (UID: "d42be7b4-a0bc-40c3-b297-1259ce32e320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.306862 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42be7b4-a0bc-40c3-b297-1259ce32e320-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.545333 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v" (OuterVolumeSpecName: "kube-api-access-qt77v") pod "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" (UID: "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8"). InnerVolumeSpecName "kube-api-access-qt77v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.545423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786" (OuterVolumeSpecName: "kube-api-access-68786") pod "d42be7b4-a0bc-40c3-b297-1259ce32e320" (UID: "d42be7b4-a0bc-40c3-b297-1259ce32e320"). InnerVolumeSpecName "kube-api-access-68786". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.564426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" (UID: "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.574445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config" (OuterVolumeSpecName: "config") pod "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" (UID: "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.575398 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" (UID: "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.576333 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" (UID: "74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612280 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68786\" (UniqueName: \"kubernetes.io/projected/d42be7b4-a0bc-40c3-b297-1259ce32e320-kube-api-access-68786\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612321 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612334 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt77v\" (UniqueName: \"kubernetes.io/projected/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-kube-api-access-qt77v\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612347 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612358 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.612369 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.649664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" event={"ID":"74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8","Type":"ContainerDied","Data":"48a6cd0ea3c764c851cb28b5663a9ef4a81f4b01245b4836557365d673780af6"} Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.649735 4812 scope.go:117] "RemoveContainer" containerID="6cc2927f2b1daf408336e683b604062bbade721294efc3f0ca860ed74f41d0b8" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.649910 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d2wgs" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.652249 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wvhjk" event={"ID":"d42be7b4-a0bc-40c3-b297-1259ce32e320","Type":"ContainerDied","Data":"6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09"} Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.652284 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wvhjk" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.652301 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f847b35f02f272bfd22873b0124d3502d5b09c88d0d0c05f09be2ca518a8e09" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.691604 4812 scope.go:117] "RemoveContainer" containerID="d13b92d1a53bf37c18bbbe1fd384ced97ff26bc9c8f1a393167c208aa8aeaa90" Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.693846 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:25 crc kubenswrapper[4812]: I0218 16:49:25.702835 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d2wgs"] Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:26.519332 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" path="/var/lib/kubelet/pods/74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8/volumes" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:29.092121 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:29.094899 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:29.095391 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:29.423956 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:29.892439 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:29.895024 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58dh657h5dch548hc5h555h54dh587h5ddh55dh7fhdch68dh9bh5f9h544h5ffh649h646h57bh686hb8h7bh5dch8h5cbh5c7h64dhfhfh68h598q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n544h645h64bh88h6dh5f8h587h67fh77h687h66bh665hdh96h554h57ch5ch8h64dh9dh56dh54fh57bh664h6h68fh597h9ch68ch689h5f8h74q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n5b8h74h565h568hdh5f4h554h697hb7h564h5d4h56h59fh56bh64bhcdh8ch698h687h54fh5bh56fhb6h5bdh645h664h7dh5b7hcdh64bh5bfh559q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n89h645hbbh696hc4h59fh649h5cbh585h68fh5c9hdbh57bh67dh65fhc9h5fch65h67fh55h578h558h66bh5d6h5ddh57fh5f7h5d9hffh599h5c9h564q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59gd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(350af9df-062b-44ba-bac2-66417c4dfcef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:30.552348 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:30.682082 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:30.682322 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="dnsmasq-dns" containerID="cri-o://d6b23ce88fc8fc6574ebc493bceaca2e191909f2dc0ff7071f3edf60847b5bf1" gracePeriod=10 Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:31.723238 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="350af9df-062b-44ba-bac2-66417c4dfcef" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:31.738005 4812 generic.go:334] "Generic (PLEG): container finished" podID="52313df7-7636-4aa8-a55e-8520ae930395" containerID="d6b23ce88fc8fc6574ebc493bceaca2e191909f2dc0ff7071f3edf60847b5bf1" exitCode=0 Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:31.738075 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" event={"ID":"52313df7-7636-4aa8-a55e-8520ae930395","Type":"ContainerDied","Data":"d6b23ce88fc8fc6574ebc493bceaca2e191909f2dc0ff7071f3edf60847b5bf1"} Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:31.739555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"350af9df-062b-44ba-bac2-66417c4dfcef","Type":"ContainerStarted","Data":"a8ca0b8338b414881ab91246d605939558b6ce905b9e884cb32215ce542e4706"} Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:31.741135 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="350af9df-062b-44ba-bac2-66417c4dfcef" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:31.901034 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:32.138965 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:32.139215 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:32.139248 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:32.139331 4812 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:49:48.139310488 +0000 UTC m=+1208.404921397 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:32.254565 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n9n6z" podUID="2a2e707c-718f-4f17-9b77-c883f7e9d9f3" containerName="ovn-controller" probeResult="failure" output=< Feb 18 16:49:34 crc kubenswrapper[4812]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 16:49:34 crc kubenswrapper[4812]: > Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:32.747571 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="350af9df-062b-44ba-bac2-66417c4dfcef" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.378992 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.104:5353: connect: connection refused" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.754889 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" event={"ID":"52313df7-7636-4aa8-a55e-8520ae930395","Type":"ContainerDied","Data":"e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874"} Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.755264 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e007550d9fd4cfbefbebdfdeca1b4e9127df8ec9997568fc1da17dc063f1e874" Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:33.764854 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 18 16:49:34 crc kubenswrapper[4812]: E0218 16:49:33.765067 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmzw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(e5e514b2-eed7-490c-95b4-f037064f1c56): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.774964 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.873841 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slqk2\" (UniqueName: \"kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2\") pod \"52313df7-7636-4aa8-a55e-8520ae930395\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.874003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config\") pod \"52313df7-7636-4aa8-a55e-8520ae930395\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.874027 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc\") pod \"52313df7-7636-4aa8-a55e-8520ae930395\" (UID: \"52313df7-7636-4aa8-a55e-8520ae930395\") " Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.879298 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2" (OuterVolumeSpecName: "kube-api-access-slqk2") pod "52313df7-7636-4aa8-a55e-8520ae930395" (UID: "52313df7-7636-4aa8-a55e-8520ae930395"). InnerVolumeSpecName "kube-api-access-slqk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.914916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52313df7-7636-4aa8-a55e-8520ae930395" (UID: "52313df7-7636-4aa8-a55e-8520ae930395"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.916673 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config" (OuterVolumeSpecName: "config") pod "52313df7-7636-4aa8-a55e-8520ae930395" (UID: "52313df7-7636-4aa8-a55e-8520ae930395"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.975956 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.975978 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52313df7-7636-4aa8-a55e-8520ae930395-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:33.975989 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slqk2\" (UniqueName: \"kubernetes.io/projected/52313df7-7636-4aa8-a55e-8520ae930395-kube-api-access-slqk2\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:34.765057 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hcvqh" Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:34.791331 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:49:34 crc kubenswrapper[4812]: I0218 16:49:34.799787 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hcvqh"] Feb 18 16:49:36 crc kubenswrapper[4812]: I0218 16:49:36.518817 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52313df7-7636-4aa8-a55e-8520ae930395" path="/var/lib/kubelet/pods/52313df7-7636-4aa8-a55e-8520ae930395/volumes" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.260986 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n9n6z" podUID="2a2e707c-718f-4f17-9b77-c883f7e9d9f3" containerName="ovn-controller" probeResult="failure" output=< Feb 18 16:49:37 crc kubenswrapper[4812]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 16:49:37 crc kubenswrapper[4812]: > Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.327724 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.330022 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s46ps" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.569979 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n9n6z-config-dcrcv"] Feb 18 16:49:37 crc kubenswrapper[4812]: E0218 16:49:37.570460 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42be7b4-a0bc-40c3-b297-1259ce32e320" containerName="mariadb-database-create" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570477 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42be7b4-a0bc-40c3-b297-1259ce32e320" containerName="mariadb-database-create" Feb 18 16:49:37 crc kubenswrapper[4812]: E0218 16:49:37.570497 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="init" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570505 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="init" Feb 18 16:49:37 crc kubenswrapper[4812]: E0218 16:49:37.570521 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570530 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: E0218 16:49:37.570555 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570563 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: E0218 16:49:37.570577 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="init" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570584 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="init" Feb 
18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570793 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42be7b4-a0bc-40c3-b297-1259ce32e320" containerName="mariadb-database-create" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570816 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="52313df7-7636-4aa8-a55e-8520ae930395" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.570827 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bb35e4-8ff1-4f54-b93c-5ebf7bca24b8" containerName="dnsmasq-dns" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.571600 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.578415 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.586406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9n6z-config-dcrcv"] Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746430 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.746710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbc7\" (UniqueName: \"kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: 
\"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848471 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbc7\" (UniqueName: \"kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848643 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848734 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848814 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.848869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: 
\"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.849410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.850750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.872875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zbc7\" (UniqueName: \"kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7\") pod \"ovn-controller-n9n6z-config-dcrcv\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:37 crc kubenswrapper[4812]: I0218 16:49:37.898781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:38 crc kubenswrapper[4812]: W0218 16:49:38.370683 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod958ae780_e7a1_49d8_b308_e34ead3507b8.slice/crio-117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0 WatchSource:0}: Error finding container 117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0: Status 404 returned error can't find the container with id 117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0 Feb 18 16:49:38 crc kubenswrapper[4812]: I0218 16:49:38.373822 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n9n6z-config-dcrcv"] Feb 18 16:49:38 crc kubenswrapper[4812]: I0218 16:49:38.796779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z-config-dcrcv" event={"ID":"958ae780-e7a1-49d8-b308-e34ead3507b8","Type":"ContainerStarted","Data":"117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0"} Feb 18 16:49:38 crc kubenswrapper[4812]: I0218 16:49:38.799079 4812 generic.go:334] "Generic (PLEG): container finished" podID="d1406358-077f-4147-9645-a0492308800c" containerID="76f5f5cf7144824215bb8ea652fef920830a808e9561f8ea3d472c5a7e3e2b06" exitCode=0 Feb 18 16:49:38 crc kubenswrapper[4812]: I0218 16:49:38.799137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nb6kk" event={"ID":"d1406358-077f-4147-9645-a0492308800c","Type":"ContainerDied","Data":"76f5f5cf7144824215bb8ea652fef920830a808e9561f8ea3d472c5a7e3e2b06"} Feb 18 16:49:39 crc kubenswrapper[4812]: I0218 16:49:39.091890 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 16:49:39 crc kubenswrapper[4812]: I0218 16:49:39.421329 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" 
podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.128345 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.329979 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts\") pod \"d1406358-077f-4147-9645-a0492308800c\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.330029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv5g4\" (UniqueName: \"kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4\") pod \"d1406358-077f-4147-9645-a0492308800c\" (UID: \"d1406358-077f-4147-9645-a0492308800c\") " Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.330729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1406358-077f-4147-9645-a0492308800c" (UID: "d1406358-077f-4147-9645-a0492308800c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.335834 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4" (OuterVolumeSpecName: "kube-api-access-nv5g4") pod "d1406358-077f-4147-9645-a0492308800c" (UID: "d1406358-077f-4147-9645-a0492308800c"). InnerVolumeSpecName "kube-api-access-nv5g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.432376 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1406358-077f-4147-9645-a0492308800c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.432417 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv5g4\" (UniqueName: \"kubernetes.io/projected/d1406358-077f-4147-9645-a0492308800c-kube-api-access-nv5g4\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.816379 4812 generic.go:334] "Generic (PLEG): container finished" podID="350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" containerID="7e1a476320d946d03983a48bdfcb6820db6d788dd7ca4b12d685550a08a3339e" exitCode=0 Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.816455 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fm48" event={"ID":"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb","Type":"ContainerDied","Data":"7e1a476320d946d03983a48bdfcb6820db6d788dd7ca4b12d685550a08a3339e"} Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.818184 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nb6kk" event={"ID":"d1406358-077f-4147-9645-a0492308800c","Type":"ContainerDied","Data":"bbaa9e16e0f7f3965482d9076e3ea17e53efbe6b108aa698316e7f90d715d46d"} Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.818224 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbaa9e16e0f7f3965482d9076e3ea17e53efbe6b108aa698316e7f90d715d46d" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.818319 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nb6kk" Feb 18 16:49:40 crc kubenswrapper[4812]: I0218 16:49:40.822504 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerStarted","Data":"41ab5483a63b2c0200b16476f984466b5c2b339ba61251824c24cb08c11a21b4"} Feb 18 16:49:41 crc kubenswrapper[4812]: I0218 16:49:41.836797 4812 generic.go:334] "Generic (PLEG): container finished" podID="95955afd-adc9-44b0-93ba-4e4a63292613" containerID="6305bb6714308dd574764743da30b170bc4f215cb7484c891ea63f70d587c7c2" exitCode=0 Feb 18 16:49:41 crc kubenswrapper[4812]: I0218 16:49:41.836885 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-cwvf6" event={"ID":"95955afd-adc9-44b0-93ba-4e4a63292613","Type":"ContainerDied","Data":"6305bb6714308dd574764743da30b170bc4f215cb7484c891ea63f70d587c7c2"} Feb 18 16:49:41 crc kubenswrapper[4812]: I0218 16:49:41.839664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z-config-dcrcv" event={"ID":"958ae780-e7a1-49d8-b308-e34ead3507b8","Type":"ContainerStarted","Data":"269ff17caf7364a796dce14db815495b229998d6a0681347bac8607dd4427df9"} Feb 18 16:49:41 crc kubenswrapper[4812]: I0218 16:49:41.889260 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n9n6z-config-dcrcv" podStartSLOduration=4.889240003 podStartE2EDuration="4.889240003s" podCreationTimestamp="2026-02-18 16:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:49:41.886884004 +0000 UTC m=+1202.152494913" watchObservedRunningTime="2026-02-18 16:49:41.889240003 +0000 UTC m=+1202.154850912" Feb 18 16:49:42 crc kubenswrapper[4812]: I0218 16:49:42.264929 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n9n6z" Feb 18 16:49:42 crc kubenswrapper[4812]: I0218 16:49:42.848345 4812 generic.go:334] "Generic (PLEG): container finished" podID="958ae780-e7a1-49d8-b308-e34ead3507b8" containerID="269ff17caf7364a796dce14db815495b229998d6a0681347bac8607dd4427df9" exitCode=0 Feb 18 16:49:42 crc kubenswrapper[4812]: I0218 16:49:42.848463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z-config-dcrcv" event={"ID":"958ae780-e7a1-49d8-b308-e34ead3507b8","Type":"ContainerDied","Data":"269ff17caf7364a796dce14db815495b229998d6a0681347bac8607dd4427df9"} Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.191264 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.287087 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts\") pod \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.287199 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbckx\" (UniqueName: \"kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx\") pod \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\" (UID: \"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb\") " Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.288262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" (UID: "350d667f-d6e0-4c3f-b5c0-91c11a0aafcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.295266 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx" (OuterVolumeSpecName: "kube-api-access-lbckx") pod "350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" (UID: "350d667f-d6e0-4c3f-b5c0-91c11a0aafcb"). InnerVolumeSpecName "kube-api-access-lbckx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.389701 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.389755 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbckx\" (UniqueName: \"kubernetes.io/projected/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb-kube-api-access-lbckx\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.858484 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fm48" event={"ID":"350d667f-d6e0-4c3f-b5c0-91c11a0aafcb","Type":"ContainerDied","Data":"4ec29fac52d0768f88436148d53d33c0538c936659b248a65fd6cd97973d8d6b"} Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.858516 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fm48" Feb 18 16:49:43 crc kubenswrapper[4812]: I0218 16:49:43.858556 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec29fac52d0768f88436148d53d33c0538c936659b248a65fd6cd97973d8d6b" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.578697 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.585182 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts\") pod \"95955afd-adc9-44b0-93ba-4e4a63292613\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665696 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665730 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zbc7\" (UniqueName: \"kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665800 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgrw8\" (UniqueName: \"kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8\") pod \"95955afd-adc9-44b0-93ba-4e4a63292613\" (UID: \"95955afd-adc9-44b0-93ba-4e4a63292613\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665830 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run" (OuterVolumeSpecName: "var-run") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665883 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665912 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.665960 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn\") pod \"958ae780-e7a1-49d8-b308-e34ead3507b8\" (UID: \"958ae780-e7a1-49d8-b308-e34ead3507b8\") " Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.666491 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.666533 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.666559 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.666627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95955afd-adc9-44b0-93ba-4e4a63292613" (UID: "95955afd-adc9-44b0-93ba-4e4a63292613"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.666894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts" (OuterVolumeSpecName: "scripts") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.667098 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.672375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7" (OuterVolumeSpecName: "kube-api-access-8zbc7") pod "958ae780-e7a1-49d8-b308-e34ead3507b8" (UID: "958ae780-e7a1-49d8-b308-e34ead3507b8"). InnerVolumeSpecName "kube-api-access-8zbc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.675222 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8" (OuterVolumeSpecName: "kube-api-access-kgrw8") pod "95955afd-adc9-44b0-93ba-4e4a63292613" (UID: "95955afd-adc9-44b0-93ba-4e4a63292613"). InnerVolumeSpecName "kube-api-access-kgrw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768335 4812 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768397 4812 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768410 4812 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/958ae780-e7a1-49d8-b308-e34ead3507b8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768421 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95955afd-adc9-44b0-93ba-4e4a63292613-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768432 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/958ae780-e7a1-49d8-b308-e34ead3507b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768441 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zbc7\" (UniqueName: \"kubernetes.io/projected/958ae780-e7a1-49d8-b308-e34ead3507b8-kube-api-access-8zbc7\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.768452 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgrw8\" (UniqueName: \"kubernetes.io/projected/95955afd-adc9-44b0-93ba-4e4a63292613-kube-api-access-kgrw8\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.885426 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n9n6z-config-dcrcv" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.885394 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n9n6z-config-dcrcv" event={"ID":"958ae780-e7a1-49d8-b308-e34ead3507b8","Type":"ContainerDied","Data":"117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0"} Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.885588 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117326f7695e6b859d1711477ce592517b59eff785a442cac00c8468472ae8a0" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.887786 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-cwvf6" event={"ID":"95955afd-adc9-44b0-93ba-4e4a63292613","Type":"ContainerDied","Data":"b1d70aecdd3bbb502e232146bbd0cbabae4320cdcc464f9872681bf6d833c061"} Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.887843 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d70aecdd3bbb502e232146bbd0cbabae4320cdcc464f9872681bf6d833c061" Feb 18 16:49:46 crc kubenswrapper[4812]: I0218 16:49:46.888188 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-cwvf6" Feb 18 16:49:47 crc kubenswrapper[4812]: I0218 16:49:47.681729 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n9n6z-config-dcrcv"] Feb 18 16:49:47 crc kubenswrapper[4812]: I0218 16:49:47.689877 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n9n6z-config-dcrcv"] Feb 18 16:49:48 crc kubenswrapper[4812]: I0218 16:49:48.201929 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:49:48 crc kubenswrapper[4812]: E0218 16:49:48.202436 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 16:49:48 crc kubenswrapper[4812]: E0218 16:49:48.202462 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 16:49:48 crc kubenswrapper[4812]: E0218 16:49:48.202535 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift podName:795346dc-bc66-461a-bb9e-64991ac27a50 nodeName:}" failed. No retries permitted until 2026-02-18 16:50:20.20251181 +0000 UTC m=+1240.468122719 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift") pod "swift-storage-0" (UID: "795346dc-bc66-461a-bb9e-64991ac27a50") : configmap "swift-ring-files" not found Feb 18 16:49:48 crc kubenswrapper[4812]: I0218 16:49:48.523524 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958ae780-e7a1-49d8-b308-e34ead3507b8" path="/var/lib/kubelet/pods/958ae780-e7a1-49d8-b308-e34ead3507b8/volumes" Feb 18 16:49:49 crc kubenswrapper[4812]: I0218 16:49:49.091918 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 16:49:49 crc kubenswrapper[4812]: I0218 16:49:49.421490 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.941972 4812 generic.go:334] "Generic (PLEG): container finished" podID="bced8af6-aca7-4de1-96a8-40c4c31a8168" containerID="37da6ef8c87cca73941fecd66b0ed1baaf0ad94244ea8a393850f3ae4e94ecde" exitCode=0 Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.942036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-70ea-account-create-update-z8dpj" event={"ID":"bced8af6-aca7-4de1-96a8-40c4c31a8168","Type":"ContainerDied","Data":"37da6ef8c87cca73941fecd66b0ed1baaf0ad94244ea8a393850f3ae4e94ecde"} Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.943521 4812 generic.go:334] "Generic (PLEG): container finished" podID="95504d5b-50f1-436c-a2fe-21835f70912e" containerID="3107070e262ee24fba9777fd7c59e2932babf6fb5ff435a936fc7fb668d8889b" exitCode=0 Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.943572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-52dc-account-create-update-s4cjj" event={"ID":"95504d5b-50f1-436c-a2fe-21835f70912e","Type":"ContainerDied","Data":"3107070e262ee24fba9777fd7c59e2932babf6fb5ff435a936fc7fb668d8889b"} Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.944883 4812 generic.go:334] "Generic (PLEG): container finished" podID="995feecd-1bbb-4fdb-b368-36f87084d6e5" containerID="29714f3b627c994c2c22ba9d58fea0f2b3c7af25998354c25d01b5996ebf1046" exitCode=0 Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.944919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kjfct" event={"ID":"995feecd-1bbb-4fdb-b368-36f87084d6e5","Type":"ContainerDied","Data":"29714f3b627c994c2c22ba9d58fea0f2b3c7af25998354c25d01b5996ebf1046"} Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.947025 4812 generic.go:334] "Generic (PLEG): container finished" podID="93bd63e5-c276-43f0-8650-bf74a32c7e7f" containerID="b821240bcd8ce78f3f661ba0304c3874a67ffe6291a5b8d2a13de148be49a7a6" exitCode=0 Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.947120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a8b1-account-create-update-785ct" event={"ID":"93bd63e5-c276-43f0-8650-bf74a32c7e7f","Type":"ContainerDied","Data":"b821240bcd8ce78f3f661ba0304c3874a67ffe6291a5b8d2a13de148be49a7a6"} Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.949026 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="bdda65b0-3132-4e95-a32e-d5772f7f1354" containerID="41b151232d544a4490fab42c933f3b203a98fa585d05bf18d3c177a3cc201723" exitCode=0 Feb 18 16:49:52 crc kubenswrapper[4812]: I0218 16:49:52.949067 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-addf-account-create-update-ltxgg" event={"ID":"bdda65b0-3132-4e95-a32e-d5772f7f1354","Type":"ContainerDied","Data":"41b151232d544a4490fab42c933f3b203a98fa585d05bf18d3c177a3cc201723"} Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.327577 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.436247 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts\") pod \"95504d5b-50f1-436c-a2fe-21835f70912e\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.436548 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22br7\" (UniqueName: \"kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7\") pod \"95504d5b-50f1-436c-a2fe-21835f70912e\" (UID: \"95504d5b-50f1-436c-a2fe-21835f70912e\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.436836 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95504d5b-50f1-436c-a2fe-21835f70912e" (UID: "95504d5b-50f1-436c-a2fe-21835f70912e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.437591 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95504d5b-50f1-436c-a2fe-21835f70912e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.442621 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7" (OuterVolumeSpecName: "kube-api-access-22br7") pod "95504d5b-50f1-436c-a2fe-21835f70912e" (UID: "95504d5b-50f1-436c-a2fe-21835f70912e"). InnerVolumeSpecName "kube-api-access-22br7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: E0218 16:49:54.452940 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.459560 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.476725 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.499990 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.512315 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538548 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts\") pod \"bdda65b0-3132-4e95-a32e-d5772f7f1354\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts\") pod \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxnl9\" (UniqueName: \"kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9\") pod \"bdda65b0-3132-4e95-a32e-d5772f7f1354\" (UID: \"bdda65b0-3132-4e95-a32e-d5772f7f1354\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538860 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts\") pod \"995feecd-1bbb-4fdb-b368-36f87084d6e5\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538953 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrhrq\" (UniqueName: \"kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq\") pod \"995feecd-1bbb-4fdb-b368-36f87084d6e5\" (UID: \"995feecd-1bbb-4fdb-b368-36f87084d6e5\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.538980 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v4fc\" (UniqueName: \"kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc\") pod \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\" (UID: \"93bd63e5-c276-43f0-8650-bf74a32c7e7f\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.539302 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdda65b0-3132-4e95-a32e-d5772f7f1354" (UID: "bdda65b0-3132-4e95-a32e-d5772f7f1354"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.539542 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdda65b0-3132-4e95-a32e-d5772f7f1354-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.539561 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22br7\" (UniqueName: \"kubernetes.io/projected/95504d5b-50f1-436c-a2fe-21835f70912e-kube-api-access-22br7\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.539966 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "995feecd-1bbb-4fdb-b368-36f87084d6e5" (UID: "995feecd-1bbb-4fdb-b368-36f87084d6e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.540532 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93bd63e5-c276-43f0-8650-bf74a32c7e7f" (UID: "93bd63e5-c276-43f0-8650-bf74a32c7e7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.544722 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq" (OuterVolumeSpecName: "kube-api-access-rrhrq") pod "995feecd-1bbb-4fdb-b368-36f87084d6e5" (UID: "995feecd-1bbb-4fdb-b368-36f87084d6e5"). InnerVolumeSpecName "kube-api-access-rrhrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.546485 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9" (OuterVolumeSpecName: "kube-api-access-bxnl9") pod "bdda65b0-3132-4e95-a32e-d5772f7f1354" (UID: "bdda65b0-3132-4e95-a32e-d5772f7f1354"). InnerVolumeSpecName "kube-api-access-bxnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.546675 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc" (OuterVolumeSpecName: "kube-api-access-8v4fc") pod "93bd63e5-c276-43f0-8650-bf74a32c7e7f" (UID: "93bd63e5-c276-43f0-8650-bf74a32c7e7f"). InnerVolumeSpecName "kube-api-access-8v4fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.641207 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4v5\" (UniqueName: \"kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5\") pod \"bced8af6-aca7-4de1-96a8-40c4c31a8168\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.641546 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts\") pod \"bced8af6-aca7-4de1-96a8-40c4c31a8168\" (UID: \"bced8af6-aca7-4de1-96a8-40c4c31a8168\") " Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.642305 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bced8af6-aca7-4de1-96a8-40c4c31a8168" (UID: "bced8af6-aca7-4de1-96a8-40c4c31a8168"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.642812 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8af6-aca7-4de1-96a8-40c4c31a8168-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.642902 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93bd63e5-c276-43f0-8650-bf74a32c7e7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.642962 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxnl9\" (UniqueName: \"kubernetes.io/projected/bdda65b0-3132-4e95-a32e-d5772f7f1354-kube-api-access-bxnl9\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.643026 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/995feecd-1bbb-4fdb-b368-36f87084d6e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.643084 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrhrq\" (UniqueName: \"kubernetes.io/projected/995feecd-1bbb-4fdb-b368-36f87084d6e5-kube-api-access-rrhrq\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.643179 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v4fc\" (UniqueName: \"kubernetes.io/projected/93bd63e5-c276-43f0-8650-bf74a32c7e7f-kube-api-access-8v4fc\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.644027 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5" (OuterVolumeSpecName: "kube-api-access-wf4v5") pod "bced8af6-aca7-4de1-96a8-40c4c31a8168" (UID: "bced8af6-aca7-4de1-96a8-40c4c31a8168"). InnerVolumeSpecName "kube-api-access-wf4v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.746315 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4v5\" (UniqueName: \"kubernetes.io/projected/bced8af6-aca7-4de1-96a8-40c4c31a8168-kube-api-access-wf4v5\") on node \"crc\" DevicePath \"\"" Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.973606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerStarted","Data":"2c939477aa5e33e139412d4e56728f7c3439f1ac3f7793154f1b1336ee71bf30"} Feb 18 16:49:54 crc kubenswrapper[4812]: I0218 16:49:54.986844 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfnnt" event={"ID":"430cd891-febe-45a3-9d5d-97b3933ab503","Type":"ContainerStarted","Data":"eba00445274d0f6b6c11616115989967ff0ff5edd17f2c2d6d064048f49ca7d9"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.004372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-52dc-account-create-update-s4cjj" event={"ID":"95504d5b-50f1-436c-a2fe-21835f70912e","Type":"ContainerDied","Data":"f77d4c50b7642dcd5d3d3417f30b5eeac9ac2dce90ba56ec9212d8a7b3709480"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.004406 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77d4c50b7642dcd5d3d3417f30b5eeac9ac2dce90ba56ec9212d8a7b3709480" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.004493 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-52dc-account-create-update-s4cjj" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.015287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"350af9df-062b-44ba-bac2-66417c4dfcef","Type":"ContainerStarted","Data":"3ceeaea41cf71ae8cd72b4fdf635949bcef6b6ebf60395f2ef5e7dc1e54b6c10"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.016046 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.018638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kjfct" event={"ID":"995feecd-1bbb-4fdb-b368-36f87084d6e5","Type":"ContainerDied","Data":"be4c69fb1c4e682a23f660d7e18f3ddd22767b7ae6366bef1d6c4291717991e9"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.018668 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4c69fb1c4e682a23f660d7e18f3ddd22767b7ae6366bef1d6c4291717991e9" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.018720 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kjfct" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.037224 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a8b1-account-create-update-785ct" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.037454 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a8b1-account-create-update-785ct" event={"ID":"93bd63e5-c276-43f0-8650-bf74a32c7e7f","Type":"ContainerDied","Data":"44b8345871c7249f6c296c540c54901e88e4f953f3e13d9aabf5c1c9ce3d7490"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.037499 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b8345871c7249f6c296c540c54901e88e4f953f3e13d9aabf5c1c9ce3d7490" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.040299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-addf-account-create-update-ltxgg" event={"ID":"bdda65b0-3132-4e95-a32e-d5772f7f1354","Type":"ContainerDied","Data":"1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.040342 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae7b297b105e36c52a6f4b0f00b192a39e0e8353e5c7eb632e6fcc5c9a37fff" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.040419 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-addf-account-create-update-ltxgg" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.043301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-70ea-account-create-update-z8dpj" event={"ID":"bced8af6-aca7-4de1-96a8-40c4c31a8168","Type":"ContainerDied","Data":"fe231f3c4dd2daadda73bd2a0a118066864b54226df0ca2a8ad83c6af091908f"} Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.043339 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe231f3c4dd2daadda73bd2a0a118066864b54226df0ca2a8ad83c6af091908f" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.043389 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-70ea-account-create-update-z8dpj" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.073480 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-pfnnt" podStartSLOduration=3.370654576 podStartE2EDuration="36.073455102s" podCreationTimestamp="2026-02-18 16:49:19 +0000 UTC" firstStartedPulling="2026-02-18 16:49:21.199784655 +0000 UTC m=+1181.465395564" lastFinishedPulling="2026-02-18 16:49:53.902585181 +0000 UTC m=+1214.168196090" observedRunningTime="2026-02-18 16:49:55.027296228 +0000 UTC m=+1215.292907137" watchObservedRunningTime="2026-02-18 16:49:55.073455102 +0000 UTC m=+1215.339066011" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.100960 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.96415032 podStartE2EDuration="43.100936151s" podCreationTimestamp="2026-02-18 16:49:12 +0000 UTC" firstStartedPulling="2026-02-18 16:49:15.052710897 +0000 UTC m=+1175.318321806" lastFinishedPulling="2026-02-18 16:49:54.189496728 +0000 UTC m=+1214.455107637" observedRunningTime="2026-02-18 16:49:55.065822008 +0000 UTC m=+1215.331432917" watchObservedRunningTime="2026-02-18 16:49:55.100936151 +0000 UTC m=+1215.366547060" Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.540683 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kjfct"] Feb 18 16:49:55 crc kubenswrapper[4812]: I0218 16:49:55.550791 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kjfct"] Feb 18 16:49:56 crc kubenswrapper[4812]: I0218 16:49:56.520052 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995feecd-1bbb-4fdb-b368-36f87084d6e5" path="/var/lib/kubelet/pods/995feecd-1bbb-4fdb-b368-36f87084d6e5/volumes" Feb 18 16:49:57 crc kubenswrapper[4812]: I0218 16:49:57.061655 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerStarted","Data":"2bd2e2bb68b55d976bf1e9039d1ff66f2acc1d9d626d04f0029bb0377ccda661"} Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.209499 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.447936952 podStartE2EDuration="1m34.209483196s" podCreationTimestamp="2026-02-18 16:48:24 +0000 UTC" firstStartedPulling="2026-02-18 16:48:44.553206389 +0000 UTC m=+1144.818817298" lastFinishedPulling="2026-02-18 16:49:56.314752633 +0000 UTC m=+1216.580363542" observedRunningTime="2026-02-18 16:49:57.10211365 +0000 UTC m=+1217.367724559" watchObservedRunningTime="2026-02-18 16:49:58.209483196 +0000 UTC m=+1218.475094105" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215141 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rtd9r"] Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215521 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bced8af6-aca7-4de1-96a8-40c4c31a8168" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215543 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bced8af6-aca7-4de1-96a8-40c4c31a8168" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215557 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="995feecd-1bbb-4fdb-b368-36f87084d6e5" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215565 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="995feecd-1bbb-4fdb-b368-36f87084d6e5" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215584 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95504d5b-50f1-436c-a2fe-21835f70912e" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215590 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="95504d5b-50f1-436c-a2fe-21835f70912e" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215600 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bd63e5-c276-43f0-8650-bf74a32c7e7f" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215605 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bd63e5-c276-43f0-8650-bf74a32c7e7f" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215616 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958ae780-e7a1-49d8-b308-e34ead3507b8" containerName="ovn-config" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215624 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="958ae780-e7a1-49d8-b308-e34ead3507b8" containerName="ovn-config" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215634 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1406358-077f-4147-9645-a0492308800c" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215639 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1406358-077f-4147-9645-a0492308800c" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215652 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdda65b0-3132-4e95-a32e-d5772f7f1354" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215658 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdda65b0-3132-4e95-a32e-d5772f7f1354" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215672 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95955afd-adc9-44b0-93ba-4e4a63292613" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215678 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="95955afd-adc9-44b0-93ba-4e4a63292613" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: E0218 16:49:58.215686 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215692 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215841 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="995feecd-1bbb-4fdb-b368-36f87084d6e5" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215849 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="95955afd-adc9-44b0-93ba-4e4a63292613" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215862 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bced8af6-aca7-4de1-96a8-40c4c31a8168" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215870 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="93bd63e5-c276-43f0-8650-bf74a32c7e7f" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215878 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="95504d5b-50f1-436c-a2fe-21835f70912e" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215884 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1406358-077f-4147-9645-a0492308800c" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215898 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="958ae780-e7a1-49d8-b308-e34ead3507b8" containerName="ovn-config" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215906 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" containerName="mariadb-database-create" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.215915 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdda65b0-3132-4e95-a32e-d5772f7f1354" containerName="mariadb-account-create-update" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.216584 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.219885 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttnms" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.227945 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rtd9r"] Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.231887 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.322860 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.322934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctnd\" (UniqueName: \"kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.322975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.323007 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.424311 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.424381 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pctnd\" (UniqueName: \"kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.424421 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.424460 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.429859 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.430049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.430188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.443171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctnd\" (UniqueName: \"kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd\") pod \"glance-db-sync-rtd9r\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:58 crc kubenswrapper[4812]: I0218 16:49:58.532924 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rtd9r" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.094286 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.344138 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v9vtd"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.345481 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.356555 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v9vtd"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.422285 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.444283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mshfn\" (UniqueName: \"kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.444371 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.466015 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4f96-account-create-update-7qlk9"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.467081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.478514 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f96-account-create-update-7qlk9"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.482506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.546417 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.546653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mshfn\" (UniqueName: \"kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.548355 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.567200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mshfn\" (UniqueName: \"kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn\") pod \"cinder-db-create-v9vtd\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.642265 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vpbqr"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.643903 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.649059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kg4z\" (UniqueName: \"kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.650152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.676504 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v9vtd" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.684850 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vpbqr"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.736801 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5934-account-create-update-kslhk"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.738287 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.741895 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.756462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kg4z\" (UniqueName: \"kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.756546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.756582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfqx\" (UniqueName: \"kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.756618 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.757441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.785997 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kg4z\" (UniqueName: \"kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z\") pod \"cinder-4f96-account-create-update-7qlk9\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.786933 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.793967 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5934-account-create-update-kslhk"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.802600 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t7vq4"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.804235 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t7vq4" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.835453 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t7vq4"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.859884 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.860021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.860128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sfqx\" (UniqueName: \"kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.860154 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtm7\" (UniqueName: \"kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.860953 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.885407 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c59-account-create-update-qjx5n"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.890813 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.891550 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sfqx\" (UniqueName: \"kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx\") pod \"barbican-db-create-vpbqr\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.893384 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rwlqt"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.894121 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.894650 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.899470 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.901483 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.901715 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-828k4" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.901964 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.903321 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c59-account-create-update-qjx5n"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.914924 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rwlqt"] Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.964520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.964639 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtm7\" (UniqueName: \"kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.964677 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjj78\" (UniqueName: \"kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78\") pod \"neutron-db-create-t7vq4\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.964712 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts\") pod \"neutron-db-create-t7vq4\" (UID: 
\"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.965498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.988752 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vpbqr" Feb 18 16:49:59 crc kubenswrapper[4812]: I0218 16:49:59.994611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtm7\" (UniqueName: \"kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7\") pod \"barbican-5934-account-create-update-kslhk\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.066545 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dt57f"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068253 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjj78\" (UniqueName: \"kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78\") pod \"neutron-db-create-t7vq4\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts\") pod \"neutron-db-create-t7vq4\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztrj\" (UniqueName: \"kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068403 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068454 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.068476 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8594t\" (UniqueName: \"kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.069456 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts\") pod \"neutron-db-create-t7vq4\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.072433 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.081147 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dt57f"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.132510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjj78\" (UniqueName: \"kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78\") pod \"neutron-db-create-t7vq4\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.170871 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztrj\" (UniqueName: \"kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.170930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.170975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.170997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8594t\" (UniqueName: \"kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.171044 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflbw\" (UniqueName: \"kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.171088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.171245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.172165 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.175640 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.176420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.198322 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztrj\" (UniqueName: \"kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj\") pod \"keystone-db-sync-rwlqt\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.205450 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8594t\" (UniqueName: \"kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t\") pod \"neutron-5c59-account-create-update-qjx5n\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.219507 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.241605 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.261686 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.272896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pflbw\" (UniqueName: \"kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.272958 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.273726 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.280777 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.285270 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v9vtd"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.298357 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pflbw\" (UniqueName: \"kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw\") pod \"root-account-create-update-dt57f\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: W0218 16:50:00.308862 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a471a58_4811_48ef_81c1_e1505df74e97.slice/crio-59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402 WatchSource:0}: Error finding container 59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402: Status 404 returned error can't find the container with id 59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402 Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.442751 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-jvwjp"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.446006 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.450649 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.451113 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-k94lt" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.464466 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.486075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-jvwjp"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.544897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4f96-account-create-update-7qlk9"] Feb 18 16:50:00 crc kubenswrapper[4812]: W0218 16:50:00.562010 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a1d4f41_dea7_4312_886c_b8c731ed5094.slice/crio-ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f WatchSource:0}: Error finding container ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f: Status 404 returned error can't find the container with id ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.578657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.578700 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.578716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.578832 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7d7\" (UniqueName: \"kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.635484 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.683081 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb7d7\" (UniqueName: \"kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.683244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.683271 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.683290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.683889 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vpbqr"] Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.703212 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.704337 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.708808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.728603 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb7d7\" (UniqueName: \"kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7\") pod \"watcher-db-sync-jvwjp\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:00 crc kubenswrapper[4812]: I0218 16:50:00.789610 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.103032 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t7vq4"] Feb 18 16:50:01 crc kubenswrapper[4812]: W0218 16:50:01.113010 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52149a39_a534_41cf_aa43_7965aa140ad3.slice/crio-41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0 WatchSource:0}: Error finding container 41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0: Status 404 returned error can't find the container with id 41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0 Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.134319 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rwlqt"] Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.162624 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5934-account-create-update-kslhk"] Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.197471 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t7vq4" event={"ID":"52149a39-a534-41cf-aa43-7965aa140ad3","Type":"ContainerStarted","Data":"41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.204000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f96-account-create-update-7qlk9" event={"ID":"9a1d4f41-dea7-4312-886c-b8c731ed5094","Type":"ContainerStarted","Data":"ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.213598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5934-account-create-update-kslhk" event={"ID":"c5618c13-9e4f-409f-bf33-ce07c822b609","Type":"ContainerStarted","Data":"6b9cf57e9254cea07f60353cc0c3dfb05e647e69401ce889dc72b9d2b90e9968"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.215707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vpbqr" event={"ID":"89791338-9ae7-471e-aa30-bff3c2438a5c","Type":"ContainerStarted","Data":"a0930eb482e72a6eb34c0218238dc76d5da39efc374875dbeb58879ca54ef886"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.216446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rwlqt" event={"ID":"f519d561-ebbc-4aff-8d2e-6b98630f5e5f","Type":"ContainerStarted","Data":"927f7d9936c8f52bf21be74e4add308111bfa60ae6f0ef6109338a79f413147d"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.217497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v9vtd" event={"ID":"6a471a58-4811-48ef-81c1-e1505df74e97","Type":"ContainerStarted","Data":"a4abc955e3bc6a488c15d682ce968563418cef9d6d03ed09ed36aa8add2a256b"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.217521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v9vtd" event={"ID":"6a471a58-4811-48ef-81c1-e1505df74e97","Type":"ContainerStarted","Data":"59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402"} Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.381698 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dt57f"] Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.430432 4812 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c59-account-create-update-qjx5n"] Feb 18 16:50:01 crc kubenswrapper[4812]: I0218 16:50:01.577723 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-jvwjp"] Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.158434 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rtd9r"] Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.239921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f96-account-create-update-7qlk9" event={"ID":"9a1d4f41-dea7-4312-886c-b8c731ed5094","Type":"ContainerStarted","Data":"0c79e9ea8de5d888fc7df23060fca302a03ec1ecf4c2e8595fa78b2627794f83"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.246408 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5934-account-create-update-kslhk" event={"ID":"c5618c13-9e4f-409f-bf33-ce07c822b609","Type":"ContainerStarted","Data":"a1feac6fea12bef17ae52bd3e11310bf26dc4cf7afa154b7dd64e02cabc50769"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.261080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rtd9r" event={"ID":"0a8de8dc-9b45-45b4-88bb-316168633d73","Type":"ContainerStarted","Data":"1b7d320758908962d5798d2088d71b1e9b27e484c71d0a5b6e9ac062bab07abe"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.273916 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vpbqr" event={"ID":"89791338-9ae7-471e-aa30-bff3c2438a5c","Type":"ContainerStarted","Data":"4b02ebf347fa9cdb83c16444b2c3f6104392031087d2ed9c5e46c43896c6e1eb"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.279380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jvwjp" event={"ID":"01adb9d2-b3f9-453d-b8a9-d5811235140c","Type":"ContainerStarted","Data":"f0bd4fac58a150667d139a412fffae39e5dd021cedcd6e7eebab69d1750b5468"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.286269 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-4f96-account-create-update-7qlk9" podStartSLOduration=3.286239887 podStartE2EDuration="3.286239887s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.263500738 +0000 UTC m=+1222.529111647" watchObservedRunningTime="2026-02-18 16:50:02.286239887 +0000 UTC m=+1222.551850796" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.288262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dt57f" event={"ID":"1c9b87a2-ae0c-47db-ad96-2109bd8571c0","Type":"ContainerStarted","Data":"5fd342793938ba57b2f76105d25b2eafde3b46d68ca531bb31e73a4b36dd2e39"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.288359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dt57f" event={"ID":"1c9b87a2-ae0c-47db-ad96-2109bd8571c0","Type":"ContainerStarted","Data":"af9183c2c4dcc6d7c05d0e6e0bbd14d95c7e1d94d967ed8905f8a58110986652"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.293187 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5934-account-create-update-kslhk" podStartSLOduration=3.293165443 podStartE2EDuration="3.293165443s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.28911492 +0000 UTC m=+1222.554725839" watchObservedRunningTime="2026-02-18 16:50:02.293165443 +0000 UTC m=+1222.558776342" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.294924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t7vq4" event={"ID":"52149a39-a534-41cf-aa43-7965aa140ad3","Type":"ContainerStarted","Data":"3b901bed95ae4e36fb7cc344c7019186accebc442efe917f5165d5252b2e2f32"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.307533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c59-account-create-update-qjx5n" event={"ID":"23517ff2-7d34-4754-9b30-f4948ae6b681","Type":"ContainerStarted","Data":"b7d9e889799f584b66beb601d181d89642c40a247ba3583e0ea8d44f42320393"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.307608 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c59-account-create-update-qjx5n" event={"ID":"23517ff2-7d34-4754-9b30-f4948ae6b681","Type":"ContainerStarted","Data":"cde9a3ab9b570f5ec3f14f9a60a83f2721b71ea7d26a32101063f2011a715551"} Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.322255 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vpbqr" podStartSLOduration=3.322233241 podStartE2EDuration="3.322233241s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.304894 +0000 UTC m=+1222.570504929" watchObservedRunningTime="2026-02-18 16:50:02.322233241 +0000 UTC m=+1222.587844150" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.371458 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-v9vtd" podStartSLOduration=3.371438673 podStartE2EDuration="3.371438673s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.362184307 +0000 UTC m=+1222.627795226" watchObservedRunningTime="2026-02-18 16:50:02.371438673 +0000 UTC m=+1222.637049582" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.373021 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dt57f" podStartSLOduration=2.373011233 podStartE2EDuration="2.373011233s" podCreationTimestamp="2026-02-18 16:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.330855731 +0000 UTC m=+1222.596466660" watchObservedRunningTime="2026-02-18 16:50:02.373011233 +0000 UTC m=+1222.638622142" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.392548 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c59-account-create-update-qjx5n" podStartSLOduration=3.392520549 podStartE2EDuration="3.392520549s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.37881165 +0000 UTC m=+1222.644422559" watchObservedRunningTime="2026-02-18 16:50:02.392520549 +0000 UTC m=+1222.658131458" Feb 18 16:50:02 crc kubenswrapper[4812]: I0218 16:50:02.408664 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-t7vq4" podStartSLOduration=3.408645599 podStartE2EDuration="3.408645599s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:50:02.39451992 +0000 UTC m=+1222.660130829" watchObservedRunningTime="2026-02-18 16:50:02.408645599 +0000 UTC m=+1222.674256508" Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.508424 4812 generic.go:334] "Generic (PLEG): container finished" podID="89791338-9ae7-471e-aa30-bff3c2438a5c" containerID="4b02ebf347fa9cdb83c16444b2c3f6104392031087d2ed9c5e46c43896c6e1eb" exitCode=0 Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.508939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vpbqr" event={"ID":"89791338-9ae7-471e-aa30-bff3c2438a5c","Type":"ContainerDied","Data":"4b02ebf347fa9cdb83c16444b2c3f6104392031087d2ed9c5e46c43896c6e1eb"} Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.516433 4812 generic.go:334] "Generic (PLEG): container finished" podID="6a471a58-4811-48ef-81c1-e1505df74e97" containerID="a4abc955e3bc6a488c15d682ce968563418cef9d6d03ed09ed36aa8add2a256b" exitCode=0 Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.516524 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v9vtd" event={"ID":"6a471a58-4811-48ef-81c1-e1505df74e97","Type":"ContainerDied","Data":"a4abc955e3bc6a488c15d682ce968563418cef9d6d03ed09ed36aa8add2a256b"} Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.524950 4812 generic.go:334] "Generic (PLEG): container finished" podID="52149a39-a534-41cf-aa43-7965aa140ad3" containerID="3b901bed95ae4e36fb7cc344c7019186accebc442efe917f5165d5252b2e2f32" exitCode=0 Feb 18 16:50:03 crc kubenswrapper[4812]: I0218 16:50:03.525688 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t7vq4" event={"ID":"52149a39-a534-41cf-aa43-7965aa140ad3","Type":"ContainerDied","Data":"3b901bed95ae4e36fb7cc344c7019186accebc442efe917f5165d5252b2e2f32"} Feb 18 16:50:04 crc kubenswrapper[4812]: I0218 16:50:04.549719 4812 generic.go:334] "Generic (PLEG): container finished" podID="9a1d4f41-dea7-4312-886c-b8c731ed5094" containerID="0c79e9ea8de5d888fc7df23060fca302a03ec1ecf4c2e8595fa78b2627794f83" exitCode=0 Feb 18 16:50:04 crc kubenswrapper[4812]: I0218 16:50:04.550209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f96-account-create-update-7qlk9" event={"ID":"9a1d4f41-dea7-4312-886c-b8c731ed5094","Type":"ContainerDied","Data":"0c79e9ea8de5d888fc7df23060fca302a03ec1ecf4c2e8595fa78b2627794f83"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.062498 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v9vtd" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.176019 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.184046 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vpbqr" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.233608 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts\") pod \"6a471a58-4811-48ef-81c1-e1505df74e97\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.233746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mshfn\" (UniqueName: \"kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn\") pod \"6a471a58-4811-48ef-81c1-e1505df74e97\" (UID: \"6a471a58-4811-48ef-81c1-e1505df74e97\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.234508 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a471a58-4811-48ef-81c1-e1505df74e97" (UID: "6a471a58-4811-48ef-81c1-e1505df74e97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.281428 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn" (OuterVolumeSpecName: "kube-api-access-mshfn") pod "6a471a58-4811-48ef-81c1-e1505df74e97" (UID: "6a471a58-4811-48ef-81c1-e1505df74e97"). InnerVolumeSpecName "kube-api-access-mshfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.336722 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjj78\" (UniqueName: \"kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78\") pod \"52149a39-a534-41cf-aa43-7965aa140ad3\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.336816 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sfqx\" (UniqueName: \"kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx\") pod \"89791338-9ae7-471e-aa30-bff3c2438a5c\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.336845 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts\") pod \"89791338-9ae7-471e-aa30-bff3c2438a5c\" (UID: \"89791338-9ae7-471e-aa30-bff3c2438a5c\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.337045 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts\") pod \"52149a39-a534-41cf-aa43-7965aa140ad3\" (UID: \"52149a39-a534-41cf-aa43-7965aa140ad3\") " Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.337552 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mshfn\" (UniqueName: \"kubernetes.io/projected/6a471a58-4811-48ef-81c1-e1505df74e97-kube-api-access-mshfn\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.337573 4812 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a471a58-4811-48ef-81c1-e1505df74e97-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.337967 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89791338-9ae7-471e-aa30-bff3c2438a5c" (UID: "89791338-9ae7-471e-aa30-bff3c2438a5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.338053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52149a39-a534-41cf-aa43-7965aa140ad3" (UID: "52149a39-a534-41cf-aa43-7965aa140ad3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.342188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78" (OuterVolumeSpecName: "kube-api-access-pjj78") pod "52149a39-a534-41cf-aa43-7965aa140ad3" (UID: "52149a39-a534-41cf-aa43-7965aa140ad3"). InnerVolumeSpecName "kube-api-access-pjj78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.342768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx" (OuterVolumeSpecName: "kube-api-access-8sfqx") pod "89791338-9ae7-471e-aa30-bff3c2438a5c" (UID: "89791338-9ae7-471e-aa30-bff3c2438a5c"). InnerVolumeSpecName "kube-api-access-8sfqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.439144 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52149a39-a534-41cf-aa43-7965aa140ad3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.439186 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjj78\" (UniqueName: \"kubernetes.io/projected/52149a39-a534-41cf-aa43-7965aa140ad3-kube-api-access-pjj78\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.439200 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sfqx\" (UniqueName: \"kubernetes.io/projected/89791338-9ae7-471e-aa30-bff3c2438a5c-kube-api-access-8sfqx\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.439208 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89791338-9ae7-471e-aa30-bff3c2438a5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.566386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vpbqr" event={"ID":"89791338-9ae7-471e-aa30-bff3c2438a5c","Type":"ContainerDied","Data":"a0930eb482e72a6eb34c0218238dc76d5da39efc374875dbeb58879ca54ef886"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.566465 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0930eb482e72a6eb34c0218238dc76d5da39efc374875dbeb58879ca54ef886" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.566497 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vpbqr" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.569963 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c9b87a2-ae0c-47db-ad96-2109bd8571c0" containerID="5fd342793938ba57b2f76105d25b2eafde3b46d68ca531bb31e73a4b36dd2e39" exitCode=0 Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.570062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dt57f" event={"ID":"1c9b87a2-ae0c-47db-ad96-2109bd8571c0","Type":"ContainerDied","Data":"5fd342793938ba57b2f76105d25b2eafde3b46d68ca531bb31e73a4b36dd2e39"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.578415 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v9vtd" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.579388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v9vtd" event={"ID":"6a471a58-4811-48ef-81c1-e1505df74e97","Type":"ContainerDied","Data":"59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.579440 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f5d4dc1879056eb5cfae0d37d3e60de456ce37d9fe5100ec0e2fdf388ed402" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.592379 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t7vq4" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.592381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t7vq4" event={"ID":"52149a39-a534-41cf-aa43-7965aa140ad3","Type":"ContainerDied","Data":"41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.592562 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c5e0e385768f5083b3ce630872ecfa9fae9b21892c844078c9ae8bc86d1bf0" Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.595131 4812 generic.go:334] "Generic (PLEG): container finished" podID="23517ff2-7d34-4754-9b30-f4948ae6b681" containerID="b7d9e889799f584b66beb601d181d89642c40a247ba3583e0ea8d44f42320393" exitCode=0 Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.595179 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c59-account-create-update-qjx5n" event={"ID":"23517ff2-7d34-4754-9b30-f4948ae6b681","Type":"ContainerDied","Data":"b7d9e889799f584b66beb601d181d89642c40a247ba3583e0ea8d44f42320393"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.602502 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5618c13-9e4f-409f-bf33-ce07c822b609" containerID="a1feac6fea12bef17ae52bd3e11310bf26dc4cf7afa154b7dd64e02cabc50769" exitCode=0 Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.602700 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5934-account-create-update-kslhk" event={"ID":"c5618c13-9e4f-409f-bf33-ce07c822b609","Type":"ContainerDied","Data":"a1feac6fea12bef17ae52bd3e11310bf26dc4cf7afa154b7dd64e02cabc50769"} Feb 18 16:50:05 crc kubenswrapper[4812]: I0218 16:50:05.940542 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.052339 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts\") pod \"9a1d4f41-dea7-4312-886c-b8c731ed5094\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.052927 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kg4z\" (UniqueName: \"kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z\") pod \"9a1d4f41-dea7-4312-886c-b8c731ed5094\" (UID: \"9a1d4f41-dea7-4312-886c-b8c731ed5094\") " Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.053267 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a1d4f41-dea7-4312-886c-b8c731ed5094" (UID: "9a1d4f41-dea7-4312-886c-b8c731ed5094"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.053645 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1d4f41-dea7-4312-886c-b8c731ed5094-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.059494 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z" (OuterVolumeSpecName: "kube-api-access-8kg4z") pod "9a1d4f41-dea7-4312-886c-b8c731ed5094" (UID: "9a1d4f41-dea7-4312-886c-b8c731ed5094"). InnerVolumeSpecName "kube-api-access-8kg4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.155239 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kg4z\" (UniqueName: \"kubernetes.io/projected/9a1d4f41-dea7-4312-886c-b8c731ed5094-kube-api-access-8kg4z\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.623005 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4f96-account-create-update-7qlk9" event={"ID":"9a1d4f41-dea7-4312-886c-b8c731ed5094","Type":"ContainerDied","Data":"ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f"} Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.623081 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8b34013cb7c6cbd0f8a08f8a416cd46f9c826ea60ab46b35a4b1efb02c0b5f" Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.625663 4812 generic.go:334] "Generic (PLEG): container finished" podID="430cd891-febe-45a3-9d5d-97b3933ab503" containerID="eba00445274d0f6b6c11616115989967ff0ff5edd17f2c2d6d064048f49ca7d9" exitCode=0 Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.625884 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfnnt" event={"ID":"430cd891-febe-45a3-9d5d-97b3933ab503","Type":"ContainerDied","Data":"eba00445274d0f6b6c11616115989967ff0ff5edd17f2c2d6d064048f49ca7d9"} Feb 18 16:50:06 crc kubenswrapper[4812]: I0218 16:50:06.626542 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4f96-account-create-update-7qlk9" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:10.630051 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:10.632789 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:10.662268 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:13.275039 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:13.275619 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="config-reloader" containerID="cri-o://41ab5483a63b2c0200b16476f984466b5c2b339ba61251824c24cb08c11a21b4" gracePeriod=600 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:13.275734 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" containerID="cri-o://2bd2e2bb68b55d976bf1e9039d1ff66f2acc1d9d626d04f0029bb0377ccda661" gracePeriod=600 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:13.275717 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="thanos-sidecar" containerID="cri-o://2c939477aa5e33e139412d4e56728f7c3439f1ac3f7793154f1b1336ee71bf30" gracePeriod=600 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:13.520790 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:15.641280 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:16.710004 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerID="2c939477aa5e33e139412d4e56728f7c3439f1ac3f7793154f1b1336ee71bf30" exitCode=0 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:16.710093 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerDied","Data":"2c939477aa5e33e139412d4e56728f7c3439f1ac3f7793154f1b1336ee71bf30"} Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:17.720255 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerID="41ab5483a63b2c0200b16476f984466b5c2b339ba61251824c24cb08c11a21b4" exitCode=0 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:17.720363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerDied","Data":"41ab5483a63b2c0200b16476f984466b5c2b339ba61251824c24cb08c11a21b4"} Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.660545 4812 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.732027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c59-account-create-update-qjx5n" event={"ID":"23517ff2-7d34-4754-9b30-f4948ae6b681","Type":"ContainerDied","Data":"cde9a3ab9b570f5ec3f14f9a60a83f2721b71ea7d26a32101063f2011a715551"} Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.732358 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde9a3ab9b570f5ec3f14f9a60a83f2721b71ea7d26a32101063f2011a715551" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.732091 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c59-account-create-update-qjx5n" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.801173 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8594t\" (UniqueName: \"kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t\") pod \"23517ff2-7d34-4754-9b30-f4948ae6b681\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.801245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts\") pod \"23517ff2-7d34-4754-9b30-f4948ae6b681\" (UID: \"23517ff2-7d34-4754-9b30-f4948ae6b681\") " Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.802203 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23517ff2-7d34-4754-9b30-f4948ae6b681" (UID: "23517ff2-7d34-4754-9b30-f4948ae6b681"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.826855 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t" (OuterVolumeSpecName: "kube-api-access-8594t") pod "23517ff2-7d34-4754-9b30-f4948ae6b681" (UID: "23517ff2-7d34-4754-9b30-f4948ae6b681"). InnerVolumeSpecName "kube-api-access-8594t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.903481 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8594t\" (UniqueName: \"kubernetes.io/projected/23517ff2-7d34-4754-9b30-f4948ae6b681-kube-api-access-8594t\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:18.903514 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23517ff2-7d34-4754-9b30-f4948ae6b681-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:19.905627 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerID="2bd2e2bb68b55d976bf1e9039d1ff66f2acc1d9d626d04f0029bb0377ccda661" exitCode=0 Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:19.905699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerDied","Data":"2bd2e2bb68b55d976bf1e9039d1ff66f2acc1d9d626d04f0029bb0377ccda661"} Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:20.303715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:20.311462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/795346dc-bc66-461a-bb9e-64991ac27a50-etc-swift\") pod \"swift-storage-0\" (UID: \"795346dc-bc66-461a-bb9e-64991ac27a50\") " pod="openstack/swift-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:20.502062 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 16:50:22 crc kubenswrapper[4812]: I0218 16:50:20.631748 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 16:50:22 crc kubenswrapper[4812]: E0218 16:50:22.749164 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 18 16:50:22 crc kubenswrapper[4812]: E0218 16:50:22.749326 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kztrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-rwlqt_openstack(f519d561-ebbc-4aff-8d2e-6b98630f5e5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:50:22 crc kubenswrapper[4812]: E0218 16:50:22.753264 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-rwlqt" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" Feb 18 16:50:22 crc kubenswrapper[4812]: E0218 16:50:22.954018 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-rwlqt" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" Feb 18 16:50:25 crc 
kubenswrapper[4812]: I0218 16:50:25.630441 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 16:50:25 crc kubenswrapper[4812]: I0218 16:50:25.631148 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.474677 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569294 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569383 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84zw7\" (UniqueName: \"kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569451 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569614 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.569644 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf\") pod \"430cd891-febe-45a3-9d5d-97b3933ab503\" (UID: \"430cd891-febe-45a3-9d5d-97b3933ab503\") " Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.571550 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). 
InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.571934 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.579075 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7" (OuterVolumeSpecName: "kube-api-access-84zw7") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "kube-api-access-84zw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.585729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.598022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts" (OuterVolumeSpecName: "scripts") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.599236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.600146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "430cd891-febe-45a3-9d5d-97b3933ab503" (UID: "430cd891-febe-45a3-9d5d-97b3933ab503"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671785 4812 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671826 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671838 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84zw7\" (UniqueName: \"kubernetes.io/projected/430cd891-febe-45a3-9d5d-97b3933ab503-kube-api-access-84zw7\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671854 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430cd891-febe-45a3-9d5d-97b3933ab503-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671865 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/430cd891-febe-45a3-9d5d-97b3933ab503-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671876 4812 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.671887 4812 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/430cd891-febe-45a3-9d5d-97b3933ab503-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.982700 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pfnnt" event={"ID":"430cd891-febe-45a3-9d5d-97b3933ab503","Type":"ContainerDied","Data":"f5915bf0f48748dc68a1eefd7103abcb91272a90376d863446f2a94a5eec0bd1"} Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.983213 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5915bf0f48748dc68a1eefd7103abcb91272a90376d863446f2a94a5eec0bd1" Feb 18 16:50:27 crc kubenswrapper[4812]: I0218 16:50:27.982773 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pfnnt" Feb 18 16:50:30 crc kubenswrapper[4812]: I0218 16:50:30.631070 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 16:50:35 crc kubenswrapper[4812]: I0218 16:50:35.630848 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 16:50:40 crc kubenswrapper[4812]: I0218 16:50:40.631356 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 16:50:44 crc kubenswrapper[4812]: E0218 16:50:44.654194 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 16:50:44 crc kubenswrapper[4812]: E0218 16:50:44.654892 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pctnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:n
il,} start failed in pod glance-db-sync-rtd9r_openstack(0a8de8dc-9b45-45b4-88bb-316168633d73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:50:44 crc kubenswrapper[4812]: E0218 16:50:44.656060 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-rtd9r" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.684956 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.690640 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.824762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts\") pod \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.824817 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts\") pod \"c5618c13-9e4f-409f-bf33-ce07c822b609\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.824850 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pflbw\" (UniqueName: \"kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw\") pod \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\" (UID: \"1c9b87a2-ae0c-47db-ad96-2109bd8571c0\") " Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.825062 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhtm7\" (UniqueName: \"kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7\") pod \"c5618c13-9e4f-409f-bf33-ce07c822b609\" (UID: \"c5618c13-9e4f-409f-bf33-ce07c822b609\") " Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.825796 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c9b87a2-ae0c-47db-ad96-2109bd8571c0" (UID: "1c9b87a2-ae0c-47db-ad96-2109bd8571c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.826173 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5618c13-9e4f-409f-bf33-ce07c822b609" (UID: "c5618c13-9e4f-409f-bf33-ce07c822b609"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.830976 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7" (OuterVolumeSpecName: "kube-api-access-xhtm7") pod "c5618c13-9e4f-409f-bf33-ce07c822b609" (UID: "c5618c13-9e4f-409f-bf33-ce07c822b609"). InnerVolumeSpecName "kube-api-access-xhtm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.837373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw" (OuterVolumeSpecName: "kube-api-access-pflbw") pod "1c9b87a2-ae0c-47db-ad96-2109bd8571c0" (UID: "1c9b87a2-ae0c-47db-ad96-2109bd8571c0"). InnerVolumeSpecName "kube-api-access-pflbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.927574 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.927620 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5618c13-9e4f-409f-bf33-ce07c822b609-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.927634 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pflbw\" (UniqueName: \"kubernetes.io/projected/1c9b87a2-ae0c-47db-ad96-2109bd8571c0-kube-api-access-pflbw\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:44 crc kubenswrapper[4812]: I0218 16:50:44.927647 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhtm7\" (UniqueName: \"kubernetes.io/projected/c5618c13-9e4f-409f-bf33-ce07c822b609-kube-api-access-xhtm7\") on node \"crc\" DevicePath \"\"" Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.124564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5934-account-create-update-kslhk" event={"ID":"c5618c13-9e4f-409f-bf33-ce07c822b609","Type":"ContainerDied","Data":"6b9cf57e9254cea07f60353cc0c3dfb05e647e69401ce889dc72b9d2b90e9968"} Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.124607 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5934-account-create-update-kslhk" Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.124622 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9cf57e9254cea07f60353cc0c3dfb05e647e69401ce889dc72b9d2b90e9968" Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.126833 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dt57f" event={"ID":"1c9b87a2-ae0c-47db-ad96-2109bd8571c0","Type":"ContainerDied","Data":"af9183c2c4dcc6d7c05d0e6e0bbd14d95c7e1d94d967ed8905f8a58110986652"} Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.126892 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9183c2c4dcc6d7c05d0e6e0bbd14d95c7e1d94d967ed8905f8a58110986652" Feb 18 16:50:45 crc kubenswrapper[4812]: I0218 16:50:45.126893 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dt57f" Feb 18 16:50:45 crc kubenswrapper[4812]: E0218 16:50:45.135770 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-rtd9r" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" Feb 18 16:50:46 crc kubenswrapper[4812]: I0218 16:50:46.028734 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dt57f"] Feb 18 16:50:46 crc kubenswrapper[4812]: I0218 16:50:46.044851 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dt57f"] Feb 18 16:50:46 crc kubenswrapper[4812]: I0218 16:50:46.519935 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9b87a2-ae0c-47db-ad96-2109bd8571c0" path="/var/lib/kubelet/pods/1c9b87a2-ae0c-47db-ad96-2109bd8571c0/volumes" Feb 18 16:50:48 crc kubenswrapper[4812]: I0218 16:50:48.631340 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.243704 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cx8kk"] Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244325 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9b87a2-ae0c-47db-ad96-2109bd8571c0" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244336 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9b87a2-ae0c-47db-ad96-2109bd8571c0" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244358 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89791338-9ae7-471e-aa30-bff3c2438a5c" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244366 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="89791338-9ae7-471e-aa30-bff3c2438a5c" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244379 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1d4f41-dea7-4312-886c-b8c731ed5094" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244387 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1d4f41-dea7-4312-886c-b8c731ed5094" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244397 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52149a39-a534-41cf-aa43-7965aa140ad3" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244403 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52149a39-a534-41cf-aa43-7965aa140ad3" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244413 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430cd891-febe-45a3-9d5d-97b3933ab503" containerName="swift-ring-rebalance" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244418 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="430cd891-febe-45a3-9d5d-97b3933ab503" containerName="swift-ring-rebalance" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244431 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23517ff2-7d34-4754-9b30-f4948ae6b681" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244436 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="23517ff2-7d34-4754-9b30-f4948ae6b681" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244449 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a471a58-4811-48ef-81c1-e1505df74e97" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244454 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a471a58-4811-48ef-81c1-e1505df74e97" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: E0218 16:50:50.244460 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5618c13-9e4f-409f-bf33-ce07c822b609" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244466 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5618c13-9e4f-409f-bf33-ce07c822b609" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244611 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9b87a2-ae0c-47db-ad96-2109bd8571c0" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244627 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a471a58-4811-48ef-81c1-e1505df74e97" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244635 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="430cd891-febe-45a3-9d5d-97b3933ab503" containerName="swift-ring-rebalance" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244646 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="89791338-9ae7-471e-aa30-bff3c2438a5c" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244655 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="23517ff2-7d34-4754-9b30-f4948ae6b681" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244662 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1d4f41-dea7-4312-886c-b8c731ed5094" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244668 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5618c13-9e4f-409f-bf33-ce07c822b609" containerName="mariadb-account-create-update" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.244676 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="52149a39-a534-41cf-aa43-7965aa140ad3" containerName="mariadb-database-create" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.245182 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.261451 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.282586 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cx8kk"] Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.346250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86k72\" (UniqueName: \"kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.346321 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.450454 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86k72\" (UniqueName: \"kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.450561 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.451409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.476632 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86k72\" (UniqueName: \"kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72\") pod \"root-account-create-update-cx8kk\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:50 crc kubenswrapper[4812]: I0218 16:50:50.596991 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cx8kk" Feb 18 16:50:53 crc kubenswrapper[4812]: I0218 16:50:53.630375 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:50:58 crc kubenswrapper[4812]: I0218 16:50:58.631992 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.053079 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150487 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150568 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150616 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150678 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150697 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150751 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150814 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.150845 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmzw2\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2\") pod \"e5e514b2-eed7-490c-95b4-f037064f1c56\" (UID: \"e5e514b2-eed7-490c-95b4-f037064f1c56\") " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.151220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.151293 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.151317 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.156347 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.156824 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2" (OuterVolumeSpecName: "kube-api-access-wmzw2") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). 
InnerVolumeSpecName "kube-api-access-wmzw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.156858 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out" (OuterVolumeSpecName: "config-out") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.156909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.169313 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.169743 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config" (OuterVolumeSpecName: "config") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.179681 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config" (OuterVolumeSpecName: "web-config") pod "e5e514b2-eed7-490c-95b4-f037064f1c56" (UID: "e5e514b2-eed7-490c-95b4-f037064f1c56"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.253804 4812 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254145 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254160 4812 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254169 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254178 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e5e514b2-eed7-490c-95b4-f037064f1c56-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254187 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254197 4812 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e5e514b2-eed7-490c-95b4-f037064f1c56-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254206 4812 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e5e514b2-eed7-490c-95b4-f037064f1c56-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254215 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmzw2\" (UniqueName: \"kubernetes.io/projected/e5e514b2-eed7-490c-95b4-f037064f1c56-kube-api-access-wmzw2\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.254249 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") on node \"crc\" " Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.274556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e5e514b2-eed7-490c-95b4-f037064f1c56","Type":"ContainerDied","Data":"61009d85a65e5cd74404c1212c253c551f0f4fc5bf8eae7b393d9922a37c4549"} Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.274607 4812 scope.go:117] "RemoveContainer" containerID="2bd2e2bb68b55d976bf1e9039d1ff66f2acc1d9d626d04f0029bb0377ccda661" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.274709 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.276191 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.276347 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651") on node "crc" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.316128 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.325607 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336049 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:51:02 crc kubenswrapper[4812]: E0218 16:51:02.336443 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="thanos-sidecar" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336465 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="thanos-sidecar" Feb 18 16:51:02 crc kubenswrapper[4812]: E0218 16:51:02.336493 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336502 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" Feb 18 16:51:02 crc kubenswrapper[4812]: E0218 16:51:02.336518 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="config-reloader" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336524 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="config-reloader" Feb 18 16:51:02 crc kubenswrapper[4812]: E0218 16:51:02.336537 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="init-config-reloader" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336543 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="init-config-reloader" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336695 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="thanos-sidecar" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336706 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="config-reloader" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.336718 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.338166 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342124 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342309 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2nrwm" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342352 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342405 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342473 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342634 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.342676 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.344283 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.347150 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.359846 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.361780 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461262 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461369 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnjm\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461388 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461440 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461520 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461543 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461564 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.461588 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.520013 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" path="/var/lib/kubelet/pods/e5e514b2-eed7-490c-95b4-f037064f1c56/volumes" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562742 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnjm\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562918 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562958 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.562986 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.563019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.563048 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.563075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.563130 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.563921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.564568 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.564653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.565810 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.565841 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af460646b9286704a29606a0b72ed4f0b878dd755da4447874f6899e9b871ead/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.567562 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.567972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.568037 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.570130 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.572030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.572554 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.574957 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.578867 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.581244 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnjm\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.591287 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:02 crc kubenswrapper[4812]: I0218 16:51:02.663255 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:03 crc kubenswrapper[4812]: I0218 16:51:03.413764 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:51:03 crc kubenswrapper[4812]: I0218 16:51:03.414897 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:51:03 crc kubenswrapper[4812]: I0218 16:51:03.630677 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e5e514b2-eed7-490c-95b4-f037064f1c56" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:51:10 crc kubenswrapper[4812]: E0218 16:51:10.644227 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 18 16:51:10 crc kubenswrapper[4812]: E0218 16:51:10.644999 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kztrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-rwlqt_openstack(f519d561-ebbc-4aff-8d2e-6b98630f5e5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Feb 18 16:51:10 crc kubenswrapper[4812]: E0218 16:51:10.647083 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-rwlqt" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" Feb 18 16:51:15 crc kubenswrapper[4812]: I0218 16:51:15.181856 4812 scope.go:117] "RemoveContainer" containerID="2c939477aa5e33e139412d4e56728f7c3439f1ac3f7793154f1b1336ee71bf30" Feb 18 16:51:15 crc kubenswrapper[4812]: I0218 16:51:15.215089 4812 scope.go:117] "RemoveContainer" containerID="41ab5483a63b2c0200b16476f984466b5c2b339ba61251824c24cb08c11a21b4" Feb 18 16:51:15 crc kubenswrapper[4812]: I0218 16:51:15.335672 4812 scope.go:117] "RemoveContainer" containerID="778bc9fb9cd4276fca153fd0e8737437821f6a315fdfae2166fbc1278a9581ee" Feb 18 16:51:15 crc kubenswrapper[4812]: I0218 16:51:15.491966 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cx8kk"] Feb 18 16:51:15 crc kubenswrapper[4812]: W0218 16:51:15.493969 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c7367f8_1c3f_46c9_8c65_d668b6244622.slice/crio-3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4 WatchSource:0}: Error finding container 3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4: Status 404 returned error can't find the container with id 3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4 Feb 18 16:51:15 crc kubenswrapper[4812]: E0218 16:51:15.538728 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 16:51:15 crc kubenswrapper[4812]: E0218 16:51:15.538806 4812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.243:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 16:51:15 crc kubenswrapper[4812]: E0218 16:51:15.538961 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.243:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-jvwjp_openstack(01adb9d2-b3f9-453d-b8a9-d5811235140c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:51:15 crc kubenswrapper[4812]: E0218 16:51:15.541623 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-jvwjp" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" Feb 18 16:51:15 crc kubenswrapper[4812]: I0218 16:51:15.641828 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 16:51:15 crc kubenswrapper[4812]: W0218 16:51:15.645564 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8d94f2a_b628_40a4_ad97_96c41ea2940a.slice/crio-e11c329bae96c7fcd447ac5a10e90ddb64399756f42353b61a4af80f173c6455 WatchSource:0}: Error finding container e11c329bae96c7fcd447ac5a10e90ddb64399756f42353b61a4af80f173c6455: Status 404 returned error can't find the container with id e11c329bae96c7fcd447ac5a10e90ddb64399756f42353b61a4af80f173c6455 Feb 18 16:51:16 crc kubenswrapper[4812]: I0218 16:51:16.397381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cx8kk" event={"ID":"0c7367f8-1c3f-46c9-8c65-d668b6244622","Type":"ContainerStarted","Data":"3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4"} Feb 18 16:51:16 crc 
kubenswrapper[4812]: I0218 16:51:16.398830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerStarted","Data":"e11c329bae96c7fcd447ac5a10e90ddb64399756f42353b61a4af80f173c6455"} Feb 18 16:51:16 crc kubenswrapper[4812]: E0218 16:51:16.401132 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.243:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-jvwjp" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" Feb 18 16:51:17 crc kubenswrapper[4812]: I0218 16:51:17.408919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cx8kk" event={"ID":"0c7367f8-1c3f-46c9-8c65-d668b6244622","Type":"ContainerStarted","Data":"cb441b52241e73310486dc17ee57f049da999ecc2976441274f8cff8ee249d2e"} Feb 18 16:51:17 crc kubenswrapper[4812]: I0218 16:51:17.586384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 16:51:18 crc kubenswrapper[4812]: I0218 16:51:18.421419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"30d0e0b6d3b3928a9f7e036d5b25f54de20998f72442b57a2f24c23e3c5b43e3"} Feb 18 16:51:18 crc kubenswrapper[4812]: I0218 16:51:18.438796 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cx8kk" podStartSLOduration=28.43878136 podStartE2EDuration="28.43878136s" podCreationTimestamp="2026-02-18 16:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:51:18.436511023 +0000 UTC m=+1298.702121952" watchObservedRunningTime="2026-02-18 16:51:18.43878136 +0000 UTC m=+1298.704392269" Feb 18 16:51:19 crc kubenswrapper[4812]: I0218 16:51:19.429924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerStarted","Data":"3bf962ec4f7eadb961e74c8ccbeb100afa6ad08bfe029a1f79c48d4164e95240"} Feb 18 16:51:29 crc kubenswrapper[4812]: I0218 16:51:29.516228 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerID="3bf962ec4f7eadb961e74c8ccbeb100afa6ad08bfe029a1f79c48d4164e95240" exitCode=0 Feb 18 16:51:29 crc kubenswrapper[4812]: I0218 16:51:29.516313 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerDied","Data":"3bf962ec4f7eadb961e74c8ccbeb100afa6ad08bfe029a1f79c48d4164e95240"} Feb 18 16:51:31 crc kubenswrapper[4812]: I0218 16:51:31.533502 4812 generic.go:334] "Generic (PLEG): container finished" podID="0c7367f8-1c3f-46c9-8c65-d668b6244622" containerID="cb441b52241e73310486dc17ee57f049da999ecc2976441274f8cff8ee249d2e" exitCode=0 Feb 18 16:51:31 crc kubenswrapper[4812]: I0218 16:51:31.533591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cx8kk" event={"ID":"0c7367f8-1c3f-46c9-8c65-d668b6244622","Type":"ContainerDied","Data":"cb441b52241e73310486dc17ee57f049da999ecc2976441274f8cff8ee249d2e"} Feb 18 16:51:32 crc kubenswrapper[4812]: I0218 16:51:32.546435 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rtd9r" event={"ID":"0a8de8dc-9b45-45b4-88bb-316168633d73","Type":"ContainerStarted","Data":"bc74b5bf4c3d20bae26ba5febbcb47a79e9df639094d0216b433df9bcb32acf0"} Feb 18 16:51:32 crc kubenswrapper[4812]: I0218 16:51:32.557439 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerStarted","Data":"06d95caf9f6d500c8886b19094b812db611a76a5ba7bb16c54c8e30e2e6d4a56"} Feb 18 16:51:33 crc kubenswrapper[4812]: I0218 16:51:33.414178 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:51:33 crc kubenswrapper[4812]: I0218 16:51:33.414636 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.575167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cx8kk" event={"ID":"0c7367f8-1c3f-46c9-8c65-d668b6244622","Type":"ContainerDied","Data":"3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4"} Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.575209 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3251d6c020cdbed19e997fa48d996b5b58088774dedcdf0c628772de7d1837c4" Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.878023 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cx8kk" Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.908679 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rtd9r" podStartSLOduration=7.491667791 podStartE2EDuration="1m36.908651633s" podCreationTimestamp="2026-02-18 16:49:58 +0000 UTC" firstStartedPulling="2026-02-18 16:50:02.17826851 +0000 UTC m=+1222.443879419" lastFinishedPulling="2026-02-18 16:51:31.595252352 +0000 UTC m=+1311.860863261" observedRunningTime="2026-02-18 16:51:32.570612689 +0000 UTC m=+1312.836223598" watchObservedRunningTime="2026-02-18 16:51:34.908651633 +0000 UTC m=+1315.174262552" Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.939248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts\") pod \"0c7367f8-1c3f-46c9-8c65-d668b6244622\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.939325 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86k72\" (UniqueName: \"kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72\") pod \"0c7367f8-1c3f-46c9-8c65-d668b6244622\" (UID: \"0c7367f8-1c3f-46c9-8c65-d668b6244622\") " Feb 18 16:51:34 crc kubenswrapper[4812]: I0218 16:51:34.940596 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c7367f8-1c3f-46c9-8c65-d668b6244622" (UID: "0c7367f8-1c3f-46c9-8c65-d668b6244622"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.041018 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c7367f8-1c3f-46c9-8c65-d668b6244622-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.141307 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72" (OuterVolumeSpecName: "kube-api-access-86k72") pod "0c7367f8-1c3f-46c9-8c65-d668b6244622" (UID: "0c7367f8-1c3f-46c9-8c65-d668b6244622"). InnerVolumeSpecName "kube-api-access-86k72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.142033 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86k72\" (UniqueName: \"kubernetes.io/projected/0c7367f8-1c3f-46c9-8c65-d668b6244622-kube-api-access-86k72\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.586567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerStarted","Data":"23aa12d5860605c219f23fb083d2f672cb24343c4d3d1e21628269e100289196"} Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.596501 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"583183458b3e7aba79394809d6d899eca18d47baa13659170ed13b97aaf12fab"} Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.596577 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"b4bda4e5d34eb103be11f9fc928e115fab0126af87edd62748a4c24236c9f7d7"} Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.598686 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jvwjp" event={"ID":"01adb9d2-b3f9-453d-b8a9-d5811235140c","Type":"ContainerStarted","Data":"c2d1ec85aca310e71011839bc3d4e94b3cfe300eebfac4c1f2dd1dbc5f4eabe5"} Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.604902 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cx8kk" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.605599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rwlqt" event={"ID":"f519d561-ebbc-4aff-8d2e-6b98630f5e5f","Type":"ContainerStarted","Data":"9c540727aadab60c26fe77c651ddf1e9610afb24f06c7ddf6ce2ccb5ea15f6c0"} Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.622370 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-jvwjp" podStartSLOduration=3.079279492 podStartE2EDuration="1m35.622351616s" podCreationTimestamp="2026-02-18 16:50:00 +0000 UTC" firstStartedPulling="2026-02-18 16:50:01.583653676 +0000 UTC m=+1221.849264585" lastFinishedPulling="2026-02-18 16:51:34.1267258 +0000 UTC m=+1314.392336709" observedRunningTime="2026-02-18 16:51:35.616502349 +0000 UTC m=+1315.882113278" watchObservedRunningTime="2026-02-18 16:51:35.622351616 +0000 UTC m=+1315.887962525" Feb 18 16:51:35 crc kubenswrapper[4812]: I0218 16:51:35.639333 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rwlqt" podStartSLOduration=3.635787217 podStartE2EDuration="1m36.639314733s" podCreationTimestamp="2026-02-18 16:49:59 +0000 UTC" firstStartedPulling="2026-02-18 16:50:01.122352803 +0000 UTC m=+1221.387963712" lastFinishedPulling="2026-02-18 16:51:34.125880319 +0000 UTC m=+1314.391491228" observedRunningTime="2026-02-18 16:51:35.629588948 +0000 UTC m=+1315.895199867" watchObservedRunningTime="2026-02-18 16:51:35.639314733 +0000 UTC m=+1315.904925642" Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.056129 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.125257 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cx8kk"] Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.139704 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cx8kk"] Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.529041 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7367f8-1c3f-46c9-8c65-d668b6244622" path="/var/lib/kubelet/pods/0c7367f8-1c3f-46c9-8c65-d668b6244622/volumes" Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.617886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerStarted","Data":"75c6fc662a478ed149214543f8e4354228d2b3af3a0e3049014b4b44f14ed00b"} Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.625655 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"dede96776214fcd201ade5c462a09e236a7b1848ebd816bc8170dbc23d6b6971"} Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.625716 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"110260df9b2f5c21ca02aec04c399d5a8e06c4cd86204521e78fd3fcc94c855c"} Feb 18 16:51:36 crc kubenswrapper[4812]: I0218 16:51:36.654114 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=34.654067293 podStartE2EDuration="34.654067293s" podCreationTimestamp="2026-02-18 16:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:51:36.653126229 +0000 UTC m=+1316.918737158" watchObservedRunningTime="2026-02-18 16:51:36.654067293 +0000 UTC m=+1316.919678202" Feb 18 16:51:37 crc kubenswrapper[4812]: I0218 16:51:37.663805 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.326353 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fbhkw"] Feb 18 16:51:40 crc kubenswrapper[4812]: E0218 16:51:40.327028 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7367f8-1c3f-46c9-8c65-d668b6244622" containerName="mariadb-account-create-update" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.327041 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7367f8-1c3f-46c9-8c65-d668b6244622" containerName="mariadb-account-create-update" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.327236 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7367f8-1c3f-46c9-8c65-d668b6244622" containerName="mariadb-account-create-update" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.327848 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.329813 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.334311 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fbhkw"] Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.439016 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb7ch\" (UniqueName: \"kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.439228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.540785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb7ch\" (UniqueName: \"kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.541089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.542263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.564863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb7ch\" (UniqueName: \"kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch\") pod \"root-account-create-update-fbhkw\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.662013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"128c41df2f6917e33a8ce01d4be515ff8cbd22f38f1c17658e62b507b6671469"} Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.662079 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"aea3b4a9f5625edc542d8704f19aff274d86e813add67104963c297652313f53"} Feb 18 16:51:40 crc kubenswrapper[4812]: I0218 16:51:40.699801 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.206477 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fbhkw"] Feb 18 16:51:41 crc kubenswrapper[4812]: W0218 16:51:41.207042 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1572786_4718_408e_95f7_0f599b961109.slice/crio-3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316 WatchSource:0}: Error finding container 3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316: Status 404 returned error can't find the container with id 3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316 Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.213166 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.684818 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"f484468becc0bae69823bdfe16d770113123ee641f2c954e25e1c8101d3a9b91"} Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.684898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"bf430d4076deea14798774964838a718e65229a80da521b620d04d0367e3f3f0"} Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.687311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbhkw" event={"ID":"f1572786-4718-408e-95f7-0f599b961109","Type":"ContainerStarted","Data":"b8a4ce55725ec44a6d6c8f11ed5ea3494cb3b26214670b56ad7e21741430d257"} Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.687375 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbhkw" event={"ID":"f1572786-4718-408e-95f7-0f599b961109","Type":"ContainerStarted","Data":"3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316"} Feb 18 16:51:41 crc kubenswrapper[4812]: I0218 16:51:41.704869 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fbhkw" podStartSLOduration=1.704851381 podStartE2EDuration="1.704851381s" podCreationTimestamp="2026-02-18 16:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:51:41.701026365 +0000 UTC m=+1321.966637274" watchObservedRunningTime="2026-02-18 16:51:41.704851381 +0000 UTC m=+1321.970462290" Feb 18 16:51:45 crc kubenswrapper[4812]: I0218 16:51:45.735266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"60e1ec5b23452809b926a0202ef2e11f6364da039ba8163478e6f6bcfb278e97"} Feb 18 16:51:45 crc kubenswrapper[4812]: I0218 16:51:45.736323 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"e2545892a7da339f331b141945294b8967a601758ff8cb1c183d4f4e50f147a8"} Feb 18 16:51:46 crc kubenswrapper[4812]: I0218 16:51:46.750619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"353767628d64c3db1d57e2821d7cbb9134acf7f09d88508509734f1aec4c2027"} Feb 18 16:51:46 crc kubenswrapper[4812]: I0218 16:51:46.750821 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"cbf6696b6716808b8b3b0b034c735e52d963d6b3a71a6f0b08df676fbb13089c"} Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.664329 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.670652 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.764933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"5e9ee83d09b2f7f9e03332df59a54da5057f2279405db99f775ae69edd6b5428"} Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.764978 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"0aeb2a1f7149675cc6c41eb624b297e09906331f288d172e6981c93843a7d233"} Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.764991 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"795346dc-bc66-461a-bb9e-64991ac27a50","Type":"ContainerStarted","Data":"0e14743af8c843c8ef940fbe1ca9506fc2f8424c51562ad372eb6555f5149f0d"} Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.769802 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 16:51:47 crc kubenswrapper[4812]: I0218 16:51:47.801709 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=125.349028193 podStartE2EDuration="2m32.801685346s" podCreationTimestamp="2026-02-18 16:49:15 +0000 UTC" firstStartedPulling="2026-02-18 16:51:17.588896395 +0000 UTC m=+1297.854507314" lastFinishedPulling="2026-02-18 16:51:45.041553558 +0000 UTC m=+1325.307164467" observedRunningTime="2026-02-18 16:51:47.79706444 +0000 UTC m=+1328.062675349" watchObservedRunningTime="2026-02-18 16:51:47.801685346 +0000 UTC m=+1328.067296265" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.102887 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.105185 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.110210 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.127331 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.185617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.185667 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.185791 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vw98\" (UniqueName: \"kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.185819 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.185914 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.186055 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: 
\"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287661 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vw98\" (UniqueName: \"kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.287725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.288402 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.288630 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.288679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.288917 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.289020 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 
16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.313631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vw98\" (UniqueName: \"kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98\") pod \"dnsmasq-dns-764c5664d7-zpjfn\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.469196 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.775752 4812 generic.go:334] "Generic (PLEG): container finished" podID="f1572786-4718-408e-95f7-0f599b961109" containerID="b8a4ce55725ec44a6d6c8f11ed5ea3494cb3b26214670b56ad7e21741430d257" exitCode=0 Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.775857 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbhkw" event={"ID":"f1572786-4718-408e-95f7-0f599b961109","Type":"ContainerDied","Data":"b8a4ce55725ec44a6d6c8f11ed5ea3494cb3b26214670b56ad7e21741430d257"} Feb 18 16:51:48 crc kubenswrapper[4812]: I0218 16:51:48.988553 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:51:48 crc kubenswrapper[4812]: W0218 16:51:48.990374 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod690a0e5e_06a2_4a91_99ba_bb0a405aee7c.slice/crio-cf2934c104500bb8d54ea565a222b545e6944c7ab3e4ec9862a992218c2637fd WatchSource:0}: Error finding container cf2934c104500bb8d54ea565a222b545e6944c7ab3e4ec9862a992218c2637fd: Status 404 returned error can't find the container with id cf2934c104500bb8d54ea565a222b545e6944c7ab3e4ec9862a992218c2637fd Feb 18 16:51:49 crc kubenswrapper[4812]: I0218 16:51:49.788790 4812 generic.go:334] "Generic (PLEG): container finished" podID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerID="e7f4fb81610459e3f8f0e3841b8755f453b35fe340b6ea9342034993a2d343e6" exitCode=0 Feb 18 16:51:49 crc kubenswrapper[4812]: I0218 16:51:49.788852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" event={"ID":"690a0e5e-06a2-4a91-99ba-bb0a405aee7c","Type":"ContainerDied","Data":"e7f4fb81610459e3f8f0e3841b8755f453b35fe340b6ea9342034993a2d343e6"} Feb 18 16:51:49 crc kubenswrapper[4812]: I0218 16:51:49.789181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" event={"ID":"690a0e5e-06a2-4a91-99ba-bb0a405aee7c","Type":"ContainerStarted","Data":"cf2934c104500bb8d54ea565a222b545e6944c7ab3e4ec9862a992218c2637fd"} Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.090171 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.160288 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb7ch\" (UniqueName: \"kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch\") pod \"f1572786-4718-408e-95f7-0f599b961109\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.160336 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts\") pod \"f1572786-4718-408e-95f7-0f599b961109\" (UID: \"f1572786-4718-408e-95f7-0f599b961109\") " Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.160844 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1572786-4718-408e-95f7-0f599b961109" (UID: "f1572786-4718-408e-95f7-0f599b961109"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.168309 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch" (OuterVolumeSpecName: "kube-api-access-lb7ch") pod "f1572786-4718-408e-95f7-0f599b961109" (UID: "f1572786-4718-408e-95f7-0f599b961109"). InnerVolumeSpecName "kube-api-access-lb7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.262950 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb7ch\" (UniqueName: \"kubernetes.io/projected/f1572786-4718-408e-95f7-0f599b961109-kube-api-access-lb7ch\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.262998 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1572786-4718-408e-95f7-0f599b961109-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.797830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fbhkw" event={"ID":"f1572786-4718-408e-95f7-0f599b961109","Type":"ContainerDied","Data":"3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316"} Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.798048 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3901f269181f6e0037388b8fa1a7b951995203a4bf97236ec17527b50f0d0316" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.797895 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fbhkw" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.800209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" event={"ID":"690a0e5e-06a2-4a91-99ba-bb0a405aee7c","Type":"ContainerStarted","Data":"996f000d015708c14e9288c83a8d617b27e91ff5d84bf007cca69345ef95ca02"} Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.801456 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:50 crc kubenswrapper[4812]: I0218 16:51:50.826369 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podStartSLOduration=2.82634383 podStartE2EDuration="2.82634383s" podCreationTimestamp="2026-02-18 16:51:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:51:50.820326359 +0000 UTC m=+1331.085937268" watchObservedRunningTime="2026-02-18 16:51:50.82634383 +0000 UTC m=+1331.091954749" Feb 18 16:51:51 crc kubenswrapper[4812]: I0218 16:51:51.139443 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fbhkw"] Feb 18 16:51:51 crc kubenswrapper[4812]: I0218 16:51:51.147694 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fbhkw"] Feb 18 16:51:52 crc kubenswrapper[4812]: I0218 16:51:52.518273 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1572786-4718-408e-95f7-0f599b961109" path="/var/lib/kubelet/pods/f1572786-4718-408e-95f7-0f599b961109/volumes" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.330431 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q2j89"] Feb 18 16:51:55 crc kubenswrapper[4812]: E0218 16:51:55.331202 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1572786-4718-408e-95f7-0f599b961109" containerName="mariadb-account-create-update" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.331220 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1572786-4718-408e-95f7-0f599b961109" containerName="mariadb-account-create-update" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.331418 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1572786-4718-408e-95f7-0f599b961109" containerName="mariadb-account-create-update" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.332031 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.335284 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.341801 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q2j89"] Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.350950 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nws7h\" (UniqueName: \"kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.350994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.452543 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nws7h\" (UniqueName: \"kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.452613 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.453753 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.472437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nws7h\" (UniqueName: \"kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h\") pod \"root-account-create-update-q2j89\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:55 crc kubenswrapper[4812]: I0218 16:51:55.649210 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.116037 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q2j89"] Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.855118 4812 generic.go:334] "Generic (PLEG): container finished" podID="01adb9d2-b3f9-453d-b8a9-d5811235140c" containerID="c2d1ec85aca310e71011839bc3d4e94b3cfe300eebfac4c1f2dd1dbc5f4eabe5" exitCode=0 Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.855179 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jvwjp" event={"ID":"01adb9d2-b3f9-453d-b8a9-d5811235140c","Type":"ContainerDied","Data":"c2d1ec85aca310e71011839bc3d4e94b3cfe300eebfac4c1f2dd1dbc5f4eabe5"} Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.857636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2j89" event={"ID":"a69c7d0a-f63f-4e81-9d76-dfa93ba16172","Type":"ContainerStarted","Data":"ad5a9dba1a1c364094bd4818e563c66c2eced3374641c1af9880d2ec022d3586"} Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.857688 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2j89" event={"ID":"a69c7d0a-f63f-4e81-9d76-dfa93ba16172","Type":"ContainerStarted","Data":"4ef60cdb1be6e284fc9c62d8f1824aacac9ce7366e8604c53e38f2ca287c803f"} Feb 18 16:51:56 crc kubenswrapper[4812]: I0218 16:51:56.906930 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-q2j89" podStartSLOduration=1.906896325 podStartE2EDuration="1.906896325s" podCreationTimestamp="2026-02-18 16:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:51:56.906044884 +0000 UTC m=+1337.171655793" watchObservedRunningTime="2026-02-18 16:51:56.906896325 +0000 UTC m=+1337.172507234" Feb 18 16:51:57 crc kubenswrapper[4812]: I0218 16:51:57.866770 4812 generic.go:334] "Generic (PLEG): container finished" podID="a69c7d0a-f63f-4e81-9d76-dfa93ba16172" containerID="ad5a9dba1a1c364094bd4818e563c66c2eced3374641c1af9880d2ec022d3586" exitCode=0 Feb 18 16:51:57 crc kubenswrapper[4812]: I0218 16:51:57.866831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2j89" event={"ID":"a69c7d0a-f63f-4e81-9d76-dfa93ba16172","Type":"ContainerDied","Data":"ad5a9dba1a1c364094bd4818e563c66c2eced3374641c1af9880d2ec022d3586"} Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.181455 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.204681 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle\") pod \"01adb9d2-b3f9-453d-b8a9-d5811235140c\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.204755 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb7d7\" (UniqueName: \"kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7\") pod \"01adb9d2-b3f9-453d-b8a9-d5811235140c\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.204861 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data\") pod \"01adb9d2-b3f9-453d-b8a9-d5811235140c\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.204923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data\") pod \"01adb9d2-b3f9-453d-b8a9-d5811235140c\" (UID: \"01adb9d2-b3f9-453d-b8a9-d5811235140c\") " Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.239592 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "01adb9d2-b3f9-453d-b8a9-d5811235140c" (UID: "01adb9d2-b3f9-453d-b8a9-d5811235140c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.240929 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7" (OuterVolumeSpecName: "kube-api-access-rb7d7") pod "01adb9d2-b3f9-453d-b8a9-d5811235140c" (UID: "01adb9d2-b3f9-453d-b8a9-d5811235140c"). InnerVolumeSpecName "kube-api-access-rb7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.242536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01adb9d2-b3f9-453d-b8a9-d5811235140c" (UID: "01adb9d2-b3f9-453d-b8a9-d5811235140c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.261441 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data" (OuterVolumeSpecName: "config-data") pod "01adb9d2-b3f9-453d-b8a9-d5811235140c" (UID: "01adb9d2-b3f9-453d-b8a9-d5811235140c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.307290 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.307323 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb7d7\" (UniqueName: \"kubernetes.io/projected/01adb9d2-b3f9-453d-b8a9-d5811235140c-kube-api-access-rb7d7\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.307334 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.307343 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01adb9d2-b3f9-453d-b8a9-d5811235140c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.470355 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.535781 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.536027 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-55brp" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="dnsmasq-dns" containerID="cri-o://d14b0fdb25b8c50a15d75eb0e8d43639a0ff0d78b52bdb7079d14cab7a8a33f5" gracePeriod=10 Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.891712 4812 generic.go:334] "Generic (PLEG): container finished" podID="c83289ef-3638-4586-9801-be6e91d900d2" containerID="d14b0fdb25b8c50a15d75eb0e8d43639a0ff0d78b52bdb7079d14cab7a8a33f5" exitCode=0 Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.891824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerDied","Data":"d14b0fdb25b8c50a15d75eb0e8d43639a0ff0d78b52bdb7079d14cab7a8a33f5"} Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.902718 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-jvwjp" Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.903153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-jvwjp" event={"ID":"01adb9d2-b3f9-453d-b8a9-d5811235140c","Type":"ContainerDied","Data":"f0bd4fac58a150667d139a412fffae39e5dd021cedcd6e7eebab69d1750b5468"} Feb 18 16:51:58 crc kubenswrapper[4812]: I0218 16:51:58.903188 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0bd4fac58a150667d139a412fffae39e5dd021cedcd6e7eebab69d1750b5468" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.206827 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.229947 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb\") pod \"c83289ef-3638-4586-9801-be6e91d900d2\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.230025 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc\") pod \"c83289ef-3638-4586-9801-be6e91d900d2\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.230079 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl528\" (UniqueName: \"kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528\") pod \"c83289ef-3638-4586-9801-be6e91d900d2\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.230140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb\") pod \"c83289ef-3638-4586-9801-be6e91d900d2\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.230320 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config\") pod \"c83289ef-3638-4586-9801-be6e91d900d2\" (UID: \"c83289ef-3638-4586-9801-be6e91d900d2\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.239567 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528" (OuterVolumeSpecName: "kube-api-access-rl528") pod "c83289ef-3638-4586-9801-be6e91d900d2" (UID: "c83289ef-3638-4586-9801-be6e91d900d2"). InnerVolumeSpecName "kube-api-access-rl528". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.285441 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c83289ef-3638-4586-9801-be6e91d900d2" (UID: "c83289ef-3638-4586-9801-be6e91d900d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.328905 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c83289ef-3638-4586-9801-be6e91d900d2" (UID: "c83289ef-3638-4586-9801-be6e91d900d2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.332397 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.332430 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl528\" (UniqueName: \"kubernetes.io/projected/c83289ef-3638-4586-9801-be6e91d900d2-kube-api-access-rl528\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.332441 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.358909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c83289ef-3638-4586-9801-be6e91d900d2" (UID: "c83289ef-3638-4586-9801-be6e91d900d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.360073 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config" (OuterVolumeSpecName: "config") pod "c83289ef-3638-4586-9801-be6e91d900d2" (UID: "c83289ef-3638-4586-9801-be6e91d900d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.368529 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.433265 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nws7h\" (UniqueName: \"kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h\") pod \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.433523 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts\") pod \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\" (UID: \"a69c7d0a-f63f-4e81-9d76-dfa93ba16172\") " Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.433851 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.433870 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c83289ef-3638-4586-9801-be6e91d900d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.434146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a69c7d0a-f63f-4e81-9d76-dfa93ba16172" (UID: "a69c7d0a-f63f-4e81-9d76-dfa93ba16172"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.438359 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h" (OuterVolumeSpecName: "kube-api-access-nws7h") pod "a69c7d0a-f63f-4e81-9d76-dfa93ba16172" (UID: "a69c7d0a-f63f-4e81-9d76-dfa93ba16172"). InnerVolumeSpecName "kube-api-access-nws7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.535311 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.535341 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nws7h\" (UniqueName: \"kubernetes.io/projected/a69c7d0a-f63f-4e81-9d76-dfa93ba16172-kube-api-access-nws7h\") on node \"crc\" DevicePath \"\"" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.912629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-55brp" event={"ID":"c83289ef-3638-4586-9801-be6e91d900d2","Type":"ContainerDied","Data":"af660a7a85deb68fcb6ef7c9e1799aa27025867732343117480bd200a4ccc4a6"} Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.912653 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-55brp" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.912690 4812 scope.go:117] "RemoveContainer" containerID="d14b0fdb25b8c50a15d75eb0e8d43639a0ff0d78b52bdb7079d14cab7a8a33f5" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.914604 4812 generic.go:334] "Generic (PLEG): container finished" podID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" containerID="9c540727aadab60c26fe77c651ddf1e9610afb24f06c7ddf6ce2ccb5ea15f6c0" exitCode=0 Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.914869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rwlqt" event={"ID":"f519d561-ebbc-4aff-8d2e-6b98630f5e5f","Type":"ContainerDied","Data":"9c540727aadab60c26fe77c651ddf1e9610afb24f06c7ddf6ce2ccb5ea15f6c0"} Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.916326 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q2j89" event={"ID":"a69c7d0a-f63f-4e81-9d76-dfa93ba16172","Type":"ContainerDied","Data":"4ef60cdb1be6e284fc9c62d8f1824aacac9ce7366e8604c53e38f2ca287c803f"} Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.916352 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef60cdb1be6e284fc9c62d8f1824aacac9ce7366e8604c53e38f2ca287c803f" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.916387 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q2j89" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.940047 4812 scope.go:117] "RemoveContainer" containerID="7e79de5741095c068fb13fa3a43c6da43b43c080bc6a78b500e76b8d4ed96090" Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.964853 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:51:59 crc kubenswrapper[4812]: I0218 16:51:59.974555 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-55brp"] Feb 18 16:52:00 crc kubenswrapper[4812]: I0218 16:52:00.519655 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83289ef-3638-4586-9801-be6e91d900d2" path="/var/lib/kubelet/pods/c83289ef-3638-4586-9801-be6e91d900d2/volumes" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.165277 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q2j89"] Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.174042 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q2j89"] Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.220008 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.264876 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle\") pod \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.264930 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data\") pod \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.265017 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kztrj\" (UniqueName: \"kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj\") pod \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\" (UID: \"f519d561-ebbc-4aff-8d2e-6b98630f5e5f\") " Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.270414 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj" (OuterVolumeSpecName: "kube-api-access-kztrj") pod "f519d561-ebbc-4aff-8d2e-6b98630f5e5f" (UID: "f519d561-ebbc-4aff-8d2e-6b98630f5e5f"). InnerVolumeSpecName "kube-api-access-kztrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.291346 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f519d561-ebbc-4aff-8d2e-6b98630f5e5f" (UID: "f519d561-ebbc-4aff-8d2e-6b98630f5e5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.317948 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data" (OuterVolumeSpecName: "config-data") pod "f519d561-ebbc-4aff-8d2e-6b98630f5e5f" (UID: "f519d561-ebbc-4aff-8d2e-6b98630f5e5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.366533 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.366569 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.366583 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kztrj\" (UniqueName: \"kubernetes.io/projected/f519d561-ebbc-4aff-8d2e-6b98630f5e5f-kube-api-access-kztrj\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.934318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rwlqt" event={"ID":"f519d561-ebbc-4aff-8d2e-6b98630f5e5f","Type":"ContainerDied","Data":"927f7d9936c8f52bf21be74e4add308111bfa60ae6f0ef6109338a79f413147d"} Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.934359 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927f7d9936c8f52bf21be74e4add308111bfa60ae6f0ef6109338a79f413147d" Feb 18 16:52:01 crc kubenswrapper[4812]: I0218 16:52:01.934374 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rwlqt" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.178617 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:02 crc kubenswrapper[4812]: E0218 16:52:02.179302 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" containerName="keystone-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179314 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" containerName="keystone-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: E0218 16:52:02.179329 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="init" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179335 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="init" Feb 18 16:52:02 crc kubenswrapper[4812]: E0218 16:52:02.179352 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" containerName="watcher-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179358 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" containerName="watcher-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: E0218 16:52:02.179367 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69c7d0a-f63f-4e81-9d76-dfa93ba16172" containerName="mariadb-account-create-update" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179373 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69c7d0a-f63f-4e81-9d76-dfa93ba16172" containerName="mariadb-account-create-update" Feb 18 16:52:02 crc kubenswrapper[4812]: E0218 16:52:02.179388 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="dnsmasq-dns" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179394 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="dnsmasq-dns" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179559 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" containerName="watcher-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179582 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" containerName="keystone-db-sync" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179589 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69c7d0a-f63f-4e81-9d76-dfa93ba16172" containerName="mariadb-account-create-update" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.179598 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83289ef-3638-4586-9801-be6e91d900d2" containerName="dnsmasq-dns" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.189250 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.231154 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283366 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqrg\" (UniqueName: \"kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283457 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283542 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.283591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.333169 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n5mhx"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.334563 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.346876 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.347085 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.362829 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n5mhx"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.374485 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-828k4" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.374733 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.374907 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.393047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn68m\" (UniqueName: \"kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.393114 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.393145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqrg\" (UniqueName: \"kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.393191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394189 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394234 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394455 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394505 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394595 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.394815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.395301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.395444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.436003 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqrg\" (UniqueName: \"kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg\") pod \"dnsmasq-dns-5959f8865f-qsprh\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.470335 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.471708 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.483431 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496127 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496195 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496260 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496287 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496334 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwfhm\" (UniqueName: \"kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496354 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496421 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn68m\" (UniqueName: \"kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.496520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.509386 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.514832 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.515246 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.525902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.532704 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.532882 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-k94lt" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.562617 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwfhm\" (UniqueName: \"kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598405 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.598871 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.622881 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.635318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.650291 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.680047 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn68m\" (UniqueName: \"kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m\") pod \"keystone-bootstrap-n5mhx\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.720975 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwfhm\" (UniqueName: \"kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm\") pod \"watcher-decision-engine-0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.779333 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69c7d0a-f63f-4e81-9d76-dfa93ba16172" path="/var/lib/kubelet/pods/a69c7d0a-f63f-4e81-9d76-dfa93ba16172/volumes" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.780268 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.811205 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.811643 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.812813 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.812839 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.812924 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.820828 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-284h9"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.821999 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.822017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.824447 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.826076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.826139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.826184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7kr\" (UniqueName: \"kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.826960 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.831198 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.831478 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7hnct" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.831605 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.837672 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.842461 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.842561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.842592 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzn75\" (UniqueName: \"kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.842876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.842908 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.843400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.847943 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-284h9"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.881767 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.885660 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.895229 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.895439 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-nw8tm" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.895628 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.895782 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.918400 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.945671 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tv8tp"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954694 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954713 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkqb\" (UniqueName: \"kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954754 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954815 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954842 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7kr\" (UniqueName: \"kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954932 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954949 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.954988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955006 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97cr\" (UniqueName: \"kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzn75\" (UniqueName: \"kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955118 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: 
\"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955150 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.955212 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.964478 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tv8tp"] Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.969266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.970967 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.971320 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.973974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.974366 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.979283 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.983710 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.991792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.992454 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.994449 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 16:52:02 crc kubenswrapper[4812]: I0218 16:52:02.994753 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-www47" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.005863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.025538 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.028865 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.032946 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.034024 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.036677 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7kr\" (UniqueName: \"kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr\") pod \"watcher-applier-0\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " pod="openstack/watcher-applier-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.045977 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7tnx6"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.047426 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.051351 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzn75\" (UniqueName: \"kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75\") pod \"watcher-api-0\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " pod="openstack/watcher-api-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.054771 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.055358 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xjzsl" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.055537 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqq6\" (UniqueName: \"kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059219 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059294 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059556 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29l7q\" (UniqueName: \"kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059627 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97cr\" (UniqueName: \"kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: 
\"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059694 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059765 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.059943 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060177 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkqb\" (UniqueName: \"kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060508 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.060800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.064801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.070724 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.071788 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.073136 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.074668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.077082 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.088595 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.092793 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.103207 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.107209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.117936 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tnx6"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.122765 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.129589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97cr\" (UniqueName: \"kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr\") pod \"horizon-7b446b4fcc-sxzl8\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.143447 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkqb\" (UniqueName: \"kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb\") pod \"cinder-db-sync-284h9\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.150011 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.151498 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.165877 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.165929 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.165961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.165994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166030 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqq6\" (UniqueName: \"kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166071 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166168 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29l7q\" (UniqueName: \"kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166191 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166212 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166236 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.166266 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p84c\" (UniqueName: \"kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.170431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.170844 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.171502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.172192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.172847 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.177247 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.181805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.182624 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.184890 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.186487 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.196898 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.197840 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.218285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.223876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqq6\" (UniqueName: \"kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6\") pod \"neutron-db-sync-tv8tp\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.224027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29l7q\" (UniqueName: \"kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q\") pod \"ceilometer-0\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.235766 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.242036 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tzm7v"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.244502 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.246691 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lhsv5" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.247000 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267879 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267906 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2c4h\" (UniqueName: \"kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267964 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.267995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268053 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 
16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268072 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268115 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7mh\" (UniqueName: \"kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268176 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268233 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p84c\" (UniqueName: \"kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.268873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.269712 4812 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tzm7v"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.271917 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-284h9" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.274150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.282667 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.292302 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.300597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p84c\" (UniqueName: \"kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c\") pod \"placement-db-sync-7tnx6\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.301048 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.345983 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.369992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnd5l\" (UniqueName: \"kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370206 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7mh\" (UniqueName: \"kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370294 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc 
kubenswrapper[4812]: I0218 16:52:03.370337 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370431 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.370456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2c4h\" (UniqueName: \"kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.377303 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.377592 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.379006 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.383908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.384682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.385154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.385273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.385899 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.386080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.401743 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.405703 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7tnx6" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.406538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7mh\" (UniqueName: \"kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh\") pod \"dnsmasq-dns-58dd9ff6bc-wl988\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.408823 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2c4h\" (UniqueName: \"kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h\") pod \"horizon-f74fd4697-6xbvv\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.413996 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.414043 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.414087 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.414793 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.414838 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968" gracePeriod=600 Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.474835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnd5l\" (UniqueName: \"kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.474896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.474992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.495371 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.495762 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.501596 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.543573 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.649402 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.766458 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.771509 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnd5l\" (UniqueName: \"kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l\") pod \"barbican-db-sync-tzm7v\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.787410 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n5mhx"] Feb 18 16:52:03 crc kubenswrapper[4812]: W0218 16:52:03.806039 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f7b80c_6652_46df_b6ee_75698ae1a9b5.slice/crio-ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a WatchSource:0}: Error finding container ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a: Status 404 returned error can't find the container with id ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a Feb 18 16:52:03 crc kubenswrapper[4812]: I0218 16:52:03.902177 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.037247 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" event={"ID":"681cca54-2e71-4b73-8b42-5dbdeb3ef465","Type":"ContainerStarted","Data":"8b8d5b20f8ebb93bc37c371f874593d8bbdd50f551410394867d8019a796c299"} Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.042969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a2c86f83-422d-46f2-942d-608f3afacaa0","Type":"ContainerStarted","Data":"7706016e7d230f4beac91727a801d9dfeee7ffbd3e73017ff7710b40d9741ec4"} Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.046502 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n5mhx" event={"ID":"28f7b80c-6652-46df-b6ee-75698ae1a9b5","Type":"ContainerStarted","Data":"ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a"} Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.107183 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.129023 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.278791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tv8tp"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.290506 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.305162 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.507306 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.537949 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.537996 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-284h9"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.680065 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7tnx6"] Feb 18 16:52:04 crc kubenswrapper[4812]: I0218 16:52:04.687952 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tzm7v"] Feb 18 16:52:04 crc kubenswrapper[4812]: W0218 16:52:04.824887 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod432ecdb1_393e_4454_a386_3134c792b4cc.slice/crio-d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2 WatchSource:0}: Error finding container d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2: Status 404 returned error can't find the container with id d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2 Feb 18 16:52:04 crc kubenswrapper[4812]: W0218 16:52:04.831154 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bdd340b_a57b_435b_b34b_a47c31b54c79.slice/crio-4bb14630cd545dbe03086192c3a6cc2cac27f1df74118084d0e0213429efd09a WatchSource:0}: Error finding container 4bb14630cd545dbe03086192c3a6cc2cac27f1df74118084d0e0213429efd09a: Status 404 returned error can't 
find the container with id 4bb14630cd545dbe03086192c3a6cc2cac27f1df74118084d0e0213429efd09a Feb 18 16:52:04 crc kubenswrapper[4812]: W0218 16:52:04.832630 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8882306_a365_4ee4_adf2_e672b20ad942.slice/crio-554e72ca644cb221445c9577a7bb13b8192bec4195b95acb4b1d23336baf1ac4 WatchSource:0}: Error finding container 554e72ca644cb221445c9577a7bb13b8192bec4195b95acb4b1d23336baf1ac4: Status 404 returned error can't find the container with id 554e72ca644cb221445c9577a7bb13b8192bec4195b95acb4b1d23336baf1ac4 Feb 18 16:52:04 crc kubenswrapper[4812]: W0218 16:52:04.848151 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b87b144_e1c5_4d51_b6f1_6896913188d1.slice/crio-bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2 WatchSource:0}: Error finding container bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2: Status 404 returned error can't find the container with id bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2 Feb 18 16:52:04 crc kubenswrapper[4812]: W0218 16:52:04.857554 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09eb0e05_320a_463b_85cd_e1e387bb2610.slice/crio-819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2 WatchSource:0}: Error finding container 819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2: Status 404 returned error can't find the container with id 819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2 Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.063371 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968" exitCode=0 Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.063934 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.063983 4812 scope.go:117] "RemoveContainer" containerID="6694fe6cf00604d7bf699da255b5f4ee7bbb368633e5806d39ece05dac043369" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.079435 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnx6" event={"ID":"09eb0e05-320a-463b-85cd-e1e387bb2610","Type":"ContainerStarted","Data":"819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.106658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tv8tp" event={"ID":"432ecdb1-393e-4454-a386-3134c792b4cc","Type":"ContainerStarted","Data":"d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.108575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bdd340b-a57b-435b-b34b-a47c31b54c79","Type":"ContainerStarted","Data":"4bb14630cd545dbe03086192c3a6cc2cac27f1df74118084d0e0213429efd09a"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.111460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"19d13bf1-b3dc-405a-9240-6133d293f08a","Type":"ContainerStarted","Data":"bee5058fc9f09a26018462a316e284c23beff8bb749a2c9c566522f3aab5ec55"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.113042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerStarted","Data":"372bae168d1ea9394e00e3d7eecd74dec25775d46e4b8fb6e33110ee61e386a2"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.116393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-284h9" event={"ID":"4b87b144-e1c5-4d51-b6f1-6896913188d1","Type":"ContainerStarted","Data":"bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.128672 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n5mhx" event={"ID":"28f7b80c-6652-46df-b6ee-75698ae1a9b5","Type":"ContainerStarted","Data":"55922bb314ae33f937d717d091601c14bc8e2381bef35fc93ef58177ea84e5dc"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.135294 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" event={"ID":"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b","Type":"ContainerStarted","Data":"7ab0d53c1dc854f679a4093b3fabb864f2ab3cca5c4ca4496ba7fcb29e640d92"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.138236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" event={"ID":"681cca54-2e71-4b73-8b42-5dbdeb3ef465","Type":"ContainerStarted","Data":"58c3bb2d808cac51a4dd5fb1659f687e983b6a8c75733222b2b4d6c120ab438e"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.139767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f74fd4697-6xbvv" event={"ID":"f6891569-1a5b-4739-93dd-48bfc3924518","Type":"ContainerStarted","Data":"1feeabe5c8fdf35b47dcb042031d969d7b3f6f15bd76dd53b6d834200449efcc"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.148324 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerStarted","Data":"554e72ca644cb221445c9577a7bb13b8192bec4195b95acb4b1d23336baf1ac4"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.154511 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tzm7v" event={"ID":"c32f52a7-3dab-42c3-b32d-ae230861ae69","Type":"ContainerStarted","Data":"adfa50c8dd97dff1e33bbae959b9512757c0537ed725be58b08a839f48768702"} Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.214613 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.273337 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.319678 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.321864 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.345167 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.360538 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.405650 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cdlpg"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.406911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.409675 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.435585 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cdlpg"] Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.449217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.449288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.449309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.449327 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.449369 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjd8\" (UniqueName: \"kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.554684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.554737 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nvd\" (UniqueName: \"kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.554924 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.555079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.555147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.555173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.555289 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjd8\" (UniqueName: \"kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.556445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.556548 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.557734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.560275 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.574687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjd8\" (UniqueName: \"kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8\") pod \"horizon-55ff894c77-sfq6d\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.656836 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.656889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9nvd\" (UniqueName: \"kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.658149 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.676514 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.676830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9nvd\" (UniqueName: \"kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd\") pod \"root-account-create-update-cdlpg\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:05 crc kubenswrapper[4812]: I0218 16:52:05.734727 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.190579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerStarted","Data":"95270089151b1b7ad156d75225071044ef1719a15343d29668f8669d85767d81"} Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.209425 4812 generic.go:334] "Generic (PLEG): container finished" podID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerID="02fa8094d3b5e53f1c4a28c8b812a05acc100a4bc38306f81708ba4c48c646d9" exitCode=0 Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.209477 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" event={"ID":"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b","Type":"ContainerDied","Data":"02fa8094d3b5e53f1c4a28c8b812a05acc100a4bc38306f81708ba4c48c646d9"} Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.229939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tv8tp" event={"ID":"432ecdb1-393e-4454-a386-3134c792b4cc","Type":"ContainerStarted","Data":"900ac1e7823b61cb43d8b708767522ffe936d94978851eab7a706aeb5300b2d6"} Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.244083 4812 generic.go:334] "Generic (PLEG): container finished" podID="681cca54-2e71-4b73-8b42-5dbdeb3ef465" containerID="58c3bb2d808cac51a4dd5fb1659f687e983b6a8c75733222b2b4d6c120ab438e" exitCode=0 Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.244224 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" event={"ID":"681cca54-2e71-4b73-8b42-5dbdeb3ef465","Type":"ContainerDied","Data":"58c3bb2d808cac51a4dd5fb1659f687e983b6a8c75733222b2b4d6c120ab438e"} Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.254919 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tv8tp" podStartSLOduration=4.254904973 podStartE2EDuration="4.254904973s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:52:06.251688132 +0000 UTC m=+1346.517299061" watchObservedRunningTime="2026-02-18 16:52:06.254904973 +0000 UTC m=+1346.520515882" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.294260 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.314814 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n5mhx" podStartSLOduration=4.314786782 podStartE2EDuration="4.314786782s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:52:06.287988157 +0000 UTC m=+1346.553599066" watchObservedRunningTime="2026-02-18 16:52:06.314786782 +0000 UTC m=+1346.580397691" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.462829 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cdlpg"] Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.758664 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.890950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.891009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.891114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.891245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtqrg\" (UniqueName: \"kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.891301 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.891354 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0\") pod \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\" (UID: \"681cca54-2e71-4b73-8b42-5dbdeb3ef465\") " Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.935387 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg" (OuterVolumeSpecName: "kube-api-access-dtqrg") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "kube-api-access-dtqrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.945072 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.945649 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.951579 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.958091 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.994300 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.994339 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.994351 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.994360 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtqrg\" (UniqueName: \"kubernetes.io/projected/681cca54-2e71-4b73-8b42-5dbdeb3ef465-kube-api-access-dtqrg\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:06 crc kubenswrapper[4812]: I0218 16:52:06.994369 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.013809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config" (OuterVolumeSpecName: "config") pod "681cca54-2e71-4b73-8b42-5dbdeb3ef465" (UID: "681cca54-2e71-4b73-8b42-5dbdeb3ef465"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.095942 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681cca54-2e71-4b73-8b42-5dbdeb3ef465-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.271160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerStarted","Data":"45283a95d8f0b464f84530475c76f2c34ef1712da71083d1dffab97ec841b91e"} Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.278258 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" event={"ID":"681cca54-2e71-4b73-8b42-5dbdeb3ef465","Type":"ContainerDied","Data":"8b8d5b20f8ebb93bc37c371f874593d8bbdd50f551410394867d8019a796c299"} Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.278338 4812 scope.go:117] "RemoveContainer" containerID="58c3bb2d808cac51a4dd5fb1659f687e983b6a8c75733222b2b4d6c120ab438e" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.278373 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-qsprh" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.282757 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerStarted","Data":"8d97e9ec95264e41fcec8f23446ea57f017e2aee1499af9001ca28436aa22f70"} Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.282921 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api-log" containerID="cri-o://95270089151b1b7ad156d75225071044ef1719a15343d29668f8669d85767d81" gracePeriod=30 Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.284987 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" containerID="cri-o://8d97e9ec95264e41fcec8f23446ea57f017e2aee1499af9001ca28436aa22f70" gracePeriod=30 Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.285156 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.291953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cdlpg" event={"ID":"dc8808f5-c824-459a-9642-da198070084e","Type":"ContainerStarted","Data":"b06526ffe21db33b20ae938140ca09863e6e38f792c615376279ce5cf26e1b4a"} Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.302316 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.308579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c"} Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.344586 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-api-0" podStartSLOduration=5.34455336 podStartE2EDuration="5.34455336s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:52:07.315138969 +0000 UTC m=+1347.580749888" watchObservedRunningTime="2026-02-18 16:52:07.34455336 +0000 UTC m=+1347.610164269" Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.492236 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:07 crc kubenswrapper[4812]: I0218 16:52:07.500410 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-qsprh"] Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.198462 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.325961 4812 generic.go:334] "Generic (PLEG): container finished" podID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerID="95270089151b1b7ad156d75225071044ef1719a15343d29668f8669d85767d81" exitCode=143 Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.326020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerDied","Data":"95270089151b1b7ad156d75225071044ef1719a15343d29668f8669d85767d81"} Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.330206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cdlpg" event={"ID":"dc8808f5-c824-459a-9642-da198070084e","Type":"ContainerStarted","Data":"bb981a9dca7ffa18247c29b2e1f90811bbf37a43abd423a124484f6d00da0ebc"} Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.342663 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" event={"ID":"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b","Type":"ContainerStarted","Data":"fd13e883e659f33527c78b0ba6e202d1084e2b596142a56e05824fbad32c56cf"} Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.360045 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cdlpg" podStartSLOduration=3.360021947 podStartE2EDuration="3.360021947s" podCreationTimestamp="2026-02-18 16:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:52:08.348308771 +0000 UTC m=+1348.613919690" watchObservedRunningTime="2026-02-18 16:52:08.360021947 +0000 UTC m=+1348.625632856" Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.529702 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="681cca54-2e71-4b73-8b42-5dbdeb3ef465" path="/var/lib/kubelet/pods/681cca54-2e71-4b73-8b42-5dbdeb3ef465/volumes" Feb 18 16:52:08 crc kubenswrapper[4812]: I0218 16:52:08.548402 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:10 crc kubenswrapper[4812]: I0218 16:52:10.541582 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" podStartSLOduration=8.541564617 podStartE2EDuration="8.541564617s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:52:08.373989599 
+0000 UTC m=+1348.639600518" watchObservedRunningTime="2026-02-18 16:52:10.541564617 +0000 UTC m=+1350.807175526" Feb 18 16:52:11 crc kubenswrapper[4812]: I0218 16:52:11.580762 4812 generic.go:334] "Generic (PLEG): container finished" podID="dc8808f5-c824-459a-9642-da198070084e" containerID="bb981a9dca7ffa18247c29b2e1f90811bbf37a43abd423a124484f6d00da0ebc" exitCode=0 Feb 18 16:52:11 crc kubenswrapper[4812]: I0218 16:52:11.580888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cdlpg" event={"ID":"dc8808f5-c824-459a-9642-da198070084e","Type":"ContainerDied","Data":"bb981a9dca7ffa18247c29b2e1f90811bbf37a43abd423a124484f6d00da0ebc"} Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.905631 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.950634 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:52:12 crc kubenswrapper[4812]: E0218 16:52:12.956308 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681cca54-2e71-4b73-8b42-5dbdeb3ef465" containerName="init" Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.956366 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="681cca54-2e71-4b73-8b42-5dbdeb3ef465" containerName="init" Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.956847 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="681cca54-2e71-4b73-8b42-5dbdeb3ef465" containerName="init" Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.960883 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.967418 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 16:52:12 crc kubenswrapper[4812]: I0218 16:52:12.983025 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.041724 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042694 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrfpj\" (UniqueName: \"kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042728 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042782 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042837 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.042920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.090186 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-dbd455b84-x6fxk"] Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.092037 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.119850 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dbd455b84-x6fxk"] Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171351 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrfpj\" (UniqueName: \"kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171434 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171572 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc 
kubenswrapper[4812]: I0218 16:52:13.171703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.171862 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.173435 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.174689 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.183668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.184587 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.189563 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.202851 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.226956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrfpj\" (UniqueName: 
\"kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj\") pod \"horizon-544c585488-4dbfm\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.247879 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.275170 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-combined-ca-bundle\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-secret-key\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276569 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-logs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-config-data\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-tls-certs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276659 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqs2\" (UniqueName: \"kubernetes.io/projected/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-kube-api-access-ngqs2\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.276743 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-scripts\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.307016 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378767 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-scripts\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378834 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-combined-ca-bundle\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378918 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-secret-key\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-logs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378964 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-config-data\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.378986 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-tls-certs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.379009 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqs2\" (UniqueName: \"kubernetes.io/projected/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-kube-api-access-ngqs2\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.400453 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-scripts\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.406405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-config-data\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.408616 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-tls-certs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.409961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-combined-ca-bundle\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.420961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-horizon-secret-key\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.429776 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqs2\" (UniqueName: \"kubernetes.io/projected/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-kube-api-access-ngqs2\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.546325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.604391 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.604650 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" containerID="cri-o://996f000d015708c14e9288c83a8d617b27e91ff5d84bf007cca69345ef95ca02" gracePeriod=10 Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.615071 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11a958b4-3c26-4d73-acfa-fb3fb4c08cb2-logs\") pod \"horizon-dbd455b84-x6fxk\" (UID: \"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2\") " pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:13 crc kubenswrapper[4812]: I0218 16:52:13.730057 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:52:14 crc kubenswrapper[4812]: I0218 16:52:14.718570 4812 generic.go:334] "Generic (PLEG): container finished" podID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerID="996f000d015708c14e9288c83a8d617b27e91ff5d84bf007cca69345ef95ca02" exitCode=0 Feb 18 16:52:14 crc kubenswrapper[4812]: I0218 16:52:14.718646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" event={"ID":"690a0e5e-06a2-4a91-99ba-bb0a405aee7c","Type":"ContainerDied","Data":"996f000d015708c14e9288c83a8d617b27e91ff5d84bf007cca69345ef95ca02"} Feb 18 16:52:14 crc kubenswrapper[4812]: I0218 16:52:14.725337 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:34254->10.217.0.152:9322: read: connection reset by peer" Feb 18 16:52:15 crc kubenswrapper[4812]: I0218 16:52:15.730795 4812 generic.go:334] "Generic (PLEG): container finished" podID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerID="8d97e9ec95264e41fcec8f23446ea57f017e2aee1499af9001ca28436aa22f70" exitCode=0 Feb 18 16:52:15 crc kubenswrapper[4812]: I0218 16:52:15.730861 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerDied","Data":"8d97e9ec95264e41fcec8f23446ea57f017e2aee1499af9001ca28436aa22f70"} Feb 18 16:52:18 crc kubenswrapper[4812]: I0218 16:52:18.199409 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 18 16:52:18 crc kubenswrapper[4812]: I0218 16:52:18.470238 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Feb 18 16:52:18 crc kubenswrapper[4812]: I0218 16:52:18.767983 4812 generic.go:334] "Generic (PLEG): container finished" podID="28f7b80c-6652-46df-b6ee-75698ae1a9b5" containerID="55922bb314ae33f937d717d091601c14bc8e2381bef35fc93ef58177ea84e5dc" exitCode=0 Feb 18 16:52:18 crc kubenswrapper[4812]: I0218 16:52:18.768040 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n5mhx" event={"ID":"28f7b80c-6652-46df-b6ee-75698ae1a9b5","Type":"ContainerDied","Data":"55922bb314ae33f937d717d091601c14bc8e2381bef35fc93ef58177ea84e5dc"} Feb 18 16:52:20 crc kubenswrapper[4812]: E0218 16:52:20.131247 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 16:52:20 crc kubenswrapper[4812]: E0218 16:52:20.131692 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n655h56h59dh695h67ch67bh658h57fh57bh4h64dh5b4h68dhc6hc6hc5h577h65fh599h579h89h4h695h5f4h679h5d7h7fh67h64h554h676h55fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2c4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f74fd4697-6xbvv_openstack(f6891569-1a5b-4739-93dd-48bfc3924518): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:52:20 crc kubenswrapper[4812]: E0218 16:52:20.393862 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f74fd4697-6xbvv" podUID="f6891569-1a5b-4739-93dd-48bfc3924518" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.496525 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.533559 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nvd\" (UniqueName: \"kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd\") pod \"dc8808f5-c824-459a-9642-da198070084e\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.533851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts\") pod \"dc8808f5-c824-459a-9642-da198070084e\" (UID: \"dc8808f5-c824-459a-9642-da198070084e\") " Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.534573 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc8808f5-c824-459a-9642-da198070084e" (UID: "dc8808f5-c824-459a-9642-da198070084e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.552392 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd" (OuterVolumeSpecName: "kube-api-access-r9nvd") pod "dc8808f5-c824-459a-9642-da198070084e" (UID: "dc8808f5-c824-459a-9642-da198070084e"). InnerVolumeSpecName "kube-api-access-r9nvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.636390 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8808f5-c824-459a-9642-da198070084e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.636433 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9nvd\" (UniqueName: \"kubernetes.io/projected/dc8808f5-c824-459a-9642-da198070084e-kube-api-access-r9nvd\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.795576 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cdlpg" Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.799482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cdlpg" event={"ID":"dc8808f5-c824-459a-9642-da198070084e","Type":"ContainerDied","Data":"b06526ffe21db33b20ae938140ca09863e6e38f792c615376279ce5cf26e1b4a"} Feb 18 16:52:20 crc kubenswrapper[4812]: I0218 16:52:20.799516 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06526ffe21db33b20ae938140ca09863e6e38f792c615376279ce5cf26e1b4a" Feb 18 16:52:26 crc kubenswrapper[4812]: I0218 16:52:26.269177 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cdlpg"] Feb 18 16:52:26 crc kubenswrapper[4812]: I0218 16:52:26.277417 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cdlpg"] Feb 18 16:52:26 crc kubenswrapper[4812]: I0218 16:52:26.518882 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8808f5-c824-459a-9642-da198070084e" path="/var/lib/kubelet/pods/dc8808f5-c824-459a-9642-da198070084e/volumes" Feb 18 16:52:27 crc kubenswrapper[4812]: E0218 16:52:27.049495 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 16:52:27 crc kubenswrapper[4812]: E0218 16:52:27.050026 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c9h64ch99h568h5b5h64dh668h54chch67h56ch659h5ch75h575h5bdh5cbh665h94h695h68bhb9h555h665h67dh597h665h686h5d6h66ch5f8h58cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bdd340b-a57b-435b-b34b-a47c31b54c79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.257122 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.373632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.373692 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.373983 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn68m\" (UniqueName: \"kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.374090 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.374135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.374186 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data\") pod \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\" (UID: \"28f7b80c-6652-46df-b6ee-75698ae1a9b5\") " Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.382212 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m" (OuterVolumeSpecName: 
"kube-api-access-cn68m") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "kube-api-access-cn68m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.382430 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts" (OuterVolumeSpecName: "scripts") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.394448 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.396934 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.408188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data" (OuterVolumeSpecName: "config-data") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.410358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28f7b80c-6652-46df-b6ee-75698ae1a9b5" (UID: "28f7b80c-6652-46df-b6ee-75698ae1a9b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476167 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn68m\" (UniqueName: \"kubernetes.io/projected/28f7b80c-6652-46df-b6ee-75698ae1a9b5-kube-api-access-cn68m\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476266 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476284 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476293 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476306 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.476334 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28f7b80c-6652-46df-b6ee-75698ae1a9b5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.893190 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n5mhx" event={"ID":"28f7b80c-6652-46df-b6ee-75698ae1a9b5","Type":"ContainerDied","Data":"ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a"} Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.893547 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee70036961c2fd61ca22542f870d6d1d645c1c08a59d0f45ac454fc5beecb11a" Feb 18 16:52:27 crc kubenswrapper[4812]: I0218 16:52:27.893245 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n5mhx" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.198890 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.330936 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n5mhx"] Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.339028 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n5mhx"] Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.434892 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rxnsr"] Feb 18 16:52:28 crc kubenswrapper[4812]: E0218 16:52:28.435346 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f7b80c-6652-46df-b6ee-75698ae1a9b5" containerName="keystone-bootstrap" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.435363 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f7b80c-6652-46df-b6ee-75698ae1a9b5" containerName="keystone-bootstrap" Feb 18 16:52:28 crc kubenswrapper[4812]: E0218 16:52:28.435404 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8808f5-c824-459a-9642-da198070084e" containerName="mariadb-account-create-update" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.435412 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8808f5-c824-459a-9642-da198070084e" containerName="mariadb-account-create-update" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.435625 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8808f5-c824-459a-9642-da198070084e" containerName="mariadb-account-create-update" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.435675 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f7b80c-6652-46df-b6ee-75698ae1a9b5" containerName="keystone-bootstrap" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.436447 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.438578 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.438927 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.439076 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-828k4" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.439697 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.442814 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rxnsr"] Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.443684 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.470391 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.504241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.504792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kjj\" (UniqueName: \"kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.505002 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.505088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.505152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.505228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.519477 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f7b80c-6652-46df-b6ee-75698ae1a9b5" path="/var/lib/kubelet/pods/28f7b80c-6652-46df-b6ee-75698ae1a9b5/volumes" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607311 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607332 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57kjj\" (UniqueName: \"kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.607535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.612737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.613184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.625837 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.626058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.626193 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.631911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57kjj\" (UniqueName: \"kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj\") pod \"keystone-bootstrap-rxnsr\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:28 crc kubenswrapper[4812]: I0218 16:52:28.771459 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.273329 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s97ft"] Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.274730 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.277245 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.286384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s97ft"] Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.392925 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcc84\" (UniqueName: \"kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.393180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.496201 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcc84\" (UniqueName: \"kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.496468 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.497688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.525432 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcc84\" (UniqueName: \"kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84\") pod \"root-account-create-update-s97ft\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:31 crc kubenswrapper[4812]: I0218 16:52:31.607030 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s97ft" Feb 18 16:52:33 crc kubenswrapper[4812]: I0218 16:52:33.199390 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:33 crc kubenswrapper[4812]: I0218 16:52:33.471326 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:52:33 crc kubenswrapper[4812]: I0218 16:52:33.471593 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:52:38 crc kubenswrapper[4812]: I0218 16:52:38.199889 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:38 crc kubenswrapper[4812]: I0218 16:52:38.471980 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:52:43 crc kubenswrapper[4812]: I0218 16:52:43.201377 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:43 crc kubenswrapper[4812]: I0218 16:52:43.473274 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:52:48 crc kubenswrapper[4812]: I0218 16:52:48.202755 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:48 crc kubenswrapper[4812]: I0218 16:52:48.478261 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.716406 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.734587 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.746585 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788661 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788751 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788768 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts\") pod \"f6891569-1a5b-4739-93dd-48bfc3924518\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data\") pod \"f6891569-1a5b-4739-93dd-48bfc3924518\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.788995 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key\") pod \"f6891569-1a5b-4739-93dd-48bfc3924518\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.789037 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs\") pod \"f6891569-1a5b-4739-93dd-48bfc3924518\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.789070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.789110 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.789141 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vw98\" (UniqueName: \"kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98\") pod \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\" (UID: \"690a0e5e-06a2-4a91-99ba-bb0a405aee7c\") " 
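The volume-teardown records above and below follow a fixed shape: reconciler_common.go:159 announces "UnmountVolume started" with the volume name and pod UID, operation_generator.go:803 confirms "TearDown succeeded" with the OuterVolumeSpecName and plugin, and reconciler_common.go:293 reports "Volume detached". The following minimal sketch is an illustrative assumption only (not kubelet tooling and not part of the captured journal): it groups lines of exactly those two "started"/"succeeded" shapes per pod UID so a long capture like this one can be read pod by pod; the regexes and the summarize helper are invented for this illustration.

import re
import sys
from collections import defaultdict

# "operationExecutor.UnmountVolume started" messages (reconciler_common.go:159)
# quote the volume name and pod UID with escaped quotes inside the journal line,
# so the backslash before each quote is treated as optional here.
STARTED = re.compile(
    r'UnmountVolume started for volume \\?"(?P<volume>[^"\\]+)\\?"'
    r'.*?\(UID: \\?"(?P<uid>[^"\\]+)\\?"\)'
)

# "UnmountVolume.TearDown succeeded" messages (operation_generator.go:803)
# use plain quotes and also carry the OuterVolumeSpecName.
TEARDOWN = re.compile(
    r'UnmountVolume\.TearDown succeeded for volume "[^"]+" '
    r'\(OuterVolumeSpecName: "(?P<volume>[^"]+)"\) pod "[^"]+" '
    r'\(UID: "(?P<uid>[^"]+)"\)'
)

def summarize(lines):
    """Group unmount events by pod UID, in the order they appear."""
    events = defaultdict(list)
    for line in lines:
        m = STARTED.search(line) or TEARDOWN.search(line)
        if m:
            kind = "teardown-ok" if "TearDown succeeded" in line else "unmount-started"
            events[m["uid"]].append((kind, m["volume"]))
    return events

if __name__ == "__main__":
    for uid, evs in summarize(sys.stdin).items():
        print(uid)
        for kind, volume in evs:
            print(f"  {kind:16} {volume}")

Fed this excerpt on stdin, it would list, for example, UID 690a0e5e-06a2-4a91-99ba-bb0a405aee7c (dnsmasq-dns-764c5664d7-zpjfn) with its dns-svc, ovsdbserver-nb, ovsdbserver-sb, config, dns-swift-storage-0 and kube-api-access-4vw98 volumes in the order they appear above.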
Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.789209 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2c4h\" (UniqueName: \"kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h\") pod \"f6891569-1a5b-4739-93dd-48bfc3924518\" (UID: \"f6891569-1a5b-4739-93dd-48bfc3924518\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.796330 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs" (OuterVolumeSpecName: "logs") pod "f6891569-1a5b-4739-93dd-48bfc3924518" (UID: "f6891569-1a5b-4739-93dd-48bfc3924518"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.799871 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data" (OuterVolumeSpecName: "config-data") pod "f6891569-1a5b-4739-93dd-48bfc3924518" (UID: "f6891569-1a5b-4739-93dd-48bfc3924518"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.799888 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts" (OuterVolumeSpecName: "scripts") pod "f6891569-1a5b-4739-93dd-48bfc3924518" (UID: "f6891569-1a5b-4739-93dd-48bfc3924518"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.800208 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f6891569-1a5b-4739-93dd-48bfc3924518" (UID: "f6891569-1a5b-4739-93dd-48bfc3924518"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.802118 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h" (OuterVolumeSpecName: "kube-api-access-s2c4h") pod "f6891569-1a5b-4739-93dd-48bfc3924518" (UID: "f6891569-1a5b-4739-93dd-48bfc3924518"). InnerVolumeSpecName "kube-api-access-s2c4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.807046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98" (OuterVolumeSpecName: "kube-api-access-4vw98") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "kube-api-access-4vw98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.891455 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca\") pod \"d3cbc636-65b7-4ada-8fd2-1415ece78814\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.891530 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle\") pod \"d3cbc636-65b7-4ada-8fd2-1415ece78814\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.891819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzn75\" (UniqueName: \"kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75\") pod \"d3cbc636-65b7-4ada-8fd2-1415ece78814\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.891849 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data\") pod \"d3cbc636-65b7-4ada-8fd2-1415ece78814\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.891939 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs\") pod \"d3cbc636-65b7-4ada-8fd2-1415ece78814\" (UID: \"d3cbc636-65b7-4ada-8fd2-1415ece78814\") " Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.892676 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs" (OuterVolumeSpecName: "logs") pod "d3cbc636-65b7-4ada-8fd2-1415ece78814" (UID: "d3cbc636-65b7-4ada-8fd2-1415ece78814"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.900014 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.907693 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75" (OuterVolumeSpecName: "kube-api-access-pzn75") pod "d3cbc636-65b7-4ada-8fd2-1415ece78814" (UID: "d3cbc636-65b7-4ada-8fd2-1415ece78814"). InnerVolumeSpecName "kube-api-access-pzn75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909627 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f6891569-1a5b-4739-93dd-48bfc3924518-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909671 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6891569-1a5b-4739-93dd-48bfc3924518-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909689 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vw98\" (UniqueName: \"kubernetes.io/projected/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-kube-api-access-4vw98\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909701 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2c4h\" (UniqueName: \"kubernetes.io/projected/f6891569-1a5b-4739-93dd-48bfc3924518-kube-api-access-s2c4h\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909720 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3cbc636-65b7-4ada-8fd2-1415ece78814-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.909731 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6891569-1a5b-4739-93dd-48bfc3924518-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.915059 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.920811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.939000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d3cbc636-65b7-4ada-8fd2-1415ece78814" (UID: "d3cbc636-65b7-4ada-8fd2-1415ece78814"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.939223 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.944008 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config" (OuterVolumeSpecName: "config") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.961188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "690a0e5e-06a2-4a91-99ba-bb0a405aee7c" (UID: "690a0e5e-06a2-4a91-99ba-bb0a405aee7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.962234 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3cbc636-65b7-4ada-8fd2-1415ece78814" (UID: "d3cbc636-65b7-4ada-8fd2-1415ece78814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:50 crc kubenswrapper[4812]: I0218 16:52:50.986246 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data" (OuterVolumeSpecName: "config-data") pod "d3cbc636-65b7-4ada-8fd2-1415ece78814" (UID: "d3cbc636-65b7-4ada-8fd2-1415ece78814"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011747 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011835 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011850 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzn75\" (UniqueName: \"kubernetes.io/projected/d3cbc636-65b7-4ada-8fd2-1415ece78814-kube-api-access-pzn75\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011864 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011876 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011888 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011903 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/690a0e5e-06a2-4a91-99ba-bb0a405aee7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011915 4812 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.011929 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cbc636-65b7-4ada-8fd2-1415ece78814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.093589 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f74fd4697-6xbvv" event={"ID":"f6891569-1a5b-4739-93dd-48bfc3924518","Type":"ContainerDied","Data":"1feeabe5c8fdf35b47dcb042031d969d7b3f6f15bd76dd53b6d834200449efcc"} Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.093602 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f74fd4697-6xbvv" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.096956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d3cbc636-65b7-4ada-8fd2-1415ece78814","Type":"ContainerDied","Data":"372bae168d1ea9394e00e3d7eecd74dec25775d46e4b8fb6e33110ee61e386a2"} Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.097016 4812 scope.go:117] "RemoveContainer" containerID="8d97e9ec95264e41fcec8f23446ea57f017e2aee1499af9001ca28436aa22f70" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.097128 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.101511 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" event={"ID":"690a0e5e-06a2-4a91-99ba-bb0a405aee7c","Type":"ContainerDied","Data":"cf2934c104500bb8d54ea565a222b545e6944c7ab3e4ec9862a992218c2637fd"} Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.101550 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.163594 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.195753 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-zpjfn"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.208592 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.218502 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.239509 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:51 crc kubenswrapper[4812]: E0218 16:52:51.244549 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="init" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.244689 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="init" Feb 18 16:52:51 crc kubenswrapper[4812]: E0218 16:52:51.244711 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.244721 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" Feb 18 16:52:51 crc kubenswrapper[4812]: E0218 16:52:51.244735 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.244742 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" Feb 18 16:52:51 crc kubenswrapper[4812]: E0218 16:52:51.244782 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api-log" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.244789 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api-log" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.247425 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.247491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.247518 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api-log" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.251468 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.254822 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.283448 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.291588 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f74fd4697-6xbvv"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.301067 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.317673 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.317797 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhrc\" (UniqueName: \"kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.318168 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.318223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.318344 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.420672 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.420748 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.420809 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs\") pod 
\"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.420864 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.420912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvhrc\" (UniqueName: \"kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.421446 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.424908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.435440 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.435721 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.437894 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvhrc\" (UniqueName: \"kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc\") pod \"watcher-api-0\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " pod="openstack/watcher-api-0" Feb 18 16:52:51 crc kubenswrapper[4812]: I0218 16:52:51.581946 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:52:52 crc kubenswrapper[4812]: I0218 16:52:52.520343 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" path="/var/lib/kubelet/pods/690a0e5e-06a2-4a91-99ba-bb0a405aee7c/volumes" Feb 18 16:52:52 crc kubenswrapper[4812]: I0218 16:52:52.521937 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" path="/var/lib/kubelet/pods/d3cbc636-65b7-4ada-8fd2-1415ece78814/volumes" Feb 18 16:52:52 crc kubenswrapper[4812]: I0218 16:52:52.522895 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6891569-1a5b-4739-93dd-48bfc3924518" path="/var/lib/kubelet/pods/f6891569-1a5b-4739-93dd-48bfc3924518/volumes" Feb 18 16:52:53 crc kubenswrapper[4812]: I0218 16:52:53.204252 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d3cbc636-65b7-4ada-8fd2-1415ece78814" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:52:53 crc kubenswrapper[4812]: I0218 16:52:53.479310 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-zpjfn" podUID="690a0e5e-06a2-4a91-99ba-bb0a405aee7c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Feb 18 16:53:08 crc kubenswrapper[4812]: E0218 16:53:08.103848 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 18 16:53:08 crc kubenswrapper[4812]: E0218 16:53:08.104583 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p84c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-7tnx6_openstack(09eb0e05-320a-463b-85cd-e1e387bb2610): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:53:08 crc kubenswrapper[4812]: E0218 16:53:08.105767 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7tnx6" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" Feb 18 16:53:08 crc kubenswrapper[4812]: E0218 16:53:08.262552 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-7tnx6" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" Feb 18 16:53:09 crc kubenswrapper[4812]: E0218 16:53:09.472739 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 16:53:09 crc kubenswrapper[4812]: E0218 16:53:09.472961 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nkqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-284h9_openstack(4b87b144-e1c5-4d51-b6f1-6896913188d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:53:09 crc kubenswrapper[4812]: E0218 16:53:09.474359 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-284h9" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.280553 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-284h9" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.383287 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.383776 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-tzm7v_openstack(c32f52a7-3dab-42c3-b32d-ae230861ae69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.385481 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-tzm7v" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.666530 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Feb 18 16:53:10 crc kubenswrapper[4812]: E0218 16:53:10.667176 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c9h64ch99h568h5b5h64dh668h54chch67h56ch659h5ch75h575h5bdh5cbh665h94h695h68bhb9h555h665h67dh597h665h686h5d6h66ch5f8h58cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bdd340b-a57b-435b-b34b-a47c31b54c79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:53:10 crc kubenswrapper[4812]: I0218 16:53:10.694274 4812 scope.go:117] "RemoveContainer" containerID="95270089151b1b7ad156d75225071044ef1719a15343d29668f8669d85767d81" Feb 18 16:53:10 crc kubenswrapper[4812]: I0218 16:53:10.753757 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dbd455b84-x6fxk"] Feb 18 16:53:10 crc kubenswrapper[4812]: W0218 16:53:10.799791 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11a958b4_3c26_4d73_acfa_fb3fb4c08cb2.slice/crio-c98951d7def916e072acb28d33b71718bb8b4a933d722e650770845f828ed97d WatchSource:0}: Error finding container c98951d7def916e072acb28d33b71718bb8b4a933d722e650770845f828ed97d: Status 404 returned error can't find the container with id c98951d7def916e072acb28d33b71718bb8b4a933d722e650770845f828ed97d Feb 18 16:53:10 crc kubenswrapper[4812]: I0218 16:53:10.952828 4812 scope.go:117] "RemoveContainer" containerID="996f000d015708c14e9288c83a8d617b27e91ff5d84bf007cca69345ef95ca02" Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.028499 4812 scope.go:117] "RemoveContainer" 
containerID="e7f4fb81610459e3f8f0e3841b8755f453b35fe340b6ea9342034993a2d343e6" Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.164576 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:53:11 crc kubenswrapper[4812]: W0218 16:53:11.174773 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf905c17d_31e8_4e36_a13a_ccc837408c9f.slice/crio-ee169314aac798664ff1e121bb0f86e438b33a9ff70623371179b99e61e05f05 WatchSource:0}: Error finding container ee169314aac798664ff1e121bb0f86e438b33a9ff70623371179b99e61e05f05: Status 404 returned error can't find the container with id ee169314aac798664ff1e121bb0f86e438b33a9ff70623371179b99e61e05f05 Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.277991 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rxnsr"] Feb 18 16:53:11 crc kubenswrapper[4812]: W0218 16:53:11.283394 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61496010_8bfd_4169_b604_2d595bfc2bf1.slice/crio-ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30 WatchSource:0}: Error finding container ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30: Status 404 returned error can't find the container with id ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30 Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.289825 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.291749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerStarted","Data":"ee169314aac798664ff1e121bb0f86e438b33a9ff70623371179b99e61e05f05"} Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.293869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dbd455b84-x6fxk" event={"ID":"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2","Type":"ContainerStarted","Data":"c98951d7def916e072acb28d33b71718bb8b4a933d722e650770845f828ed97d"} Feb 18 16:53:11 crc kubenswrapper[4812]: E0218 16:53:11.296359 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-tzm7v" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.334071 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s97ft"] Feb 18 16:53:11 crc kubenswrapper[4812]: W0218 16:53:11.340130 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7842051b_f5fe_4dd4_8f0d_3c5850dbf55e.slice/crio-fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d WatchSource:0}: Error finding container fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d: Status 404 returned error can't find the container with id fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d Feb 18 16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.346172 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 
16:53:11 crc kubenswrapper[4812]: I0218 16:53:11.444408 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.305701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerStarted","Data":"8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.306201 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerStarted","Data":"3e47b6b85536793cb8c2784ed56ecb52cb5f702a16cfb2021ff30d6d4b28063e"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.305782 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ff894c77-sfq6d" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon-log" containerID="cri-o://3e47b6b85536793cb8c2784ed56ecb52cb5f702a16cfb2021ff30d6d4b28063e" gracePeriod=30 Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.305846 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ff894c77-sfq6d" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon" containerID="cri-o://8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0" gracePeriod=30 Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.311152 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a2c86f83-422d-46f2-942d-608f3afacaa0","Type":"ContainerStarted","Data":"9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.316976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dbd455b84-x6fxk" event={"ID":"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2","Type":"ContainerStarted","Data":"02d536988c0fbe4e2fcb4e41398938c94741262bff09c439b4886dde9223a6ae"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.317154 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dbd455b84-x6fxk" event={"ID":"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2","Type":"ContainerStarted","Data":"79c85b03fa96d523726ef02e5fd3cfcd98b08dceb7a8767941f8064a88e87038"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.322768 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerStarted","Data":"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.322850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerStarted","Data":"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.325901 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerStarted","Data":"e88efaab20a3aa4c8f9bfe1dac9d4893fa4fccdd15eefe6832f81f2522a9c59b"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.325944 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerStarted","Data":"fc14977535bb5014d59653bc04ce7cc46f15633a98631f93b9e8e4456ec194e7"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.337923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s97ft" event={"ID":"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e","Type":"ContainerStarted","Data":"1c05da71c3a9ba4947cfb1f95772b8b2bd3749c3d56ce7904e354c6b21450196"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.337974 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s97ft" event={"ID":"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e","Type":"ContainerStarted","Data":"fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.338446 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55ff894c77-sfq6d" podStartSLOduration=2.965004151 podStartE2EDuration="1m7.338427653s" podCreationTimestamp="2026-02-18 16:52:05 +0000 UTC" firstStartedPulling="2026-02-18 16:52:06.320857415 +0000 UTC m=+1346.586468324" lastFinishedPulling="2026-02-18 16:53:10.694280917 +0000 UTC m=+1410.959891826" observedRunningTime="2026-02-18 16:53:12.334932408 +0000 UTC m=+1412.600543317" watchObservedRunningTime="2026-02-18 16:53:12.338427653 +0000 UTC m=+1412.604038562" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.339653 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"19d13bf1-b3dc-405a-9240-6133d293f08a","Type":"ContainerStarted","Data":"069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.343278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rxnsr" event={"ID":"61496010-8bfd-4169-b604-2d595bfc2bf1","Type":"ContainerStarted","Data":"cd4b260a85c7f7fe7ef10a6b462201c8378f854c1ea2d2a540cc0911ec87a9a3"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.343315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rxnsr" event={"ID":"61496010-8bfd-4169-b604-2d595bfc2bf1","Type":"ContainerStarted","Data":"ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.348272 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerStarted","Data":"bf48c949dabff07439306379dc67c2e9021f11ce35f6a0ac416d931ee3239a2e"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.348316 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerStarted","Data":"0891e31806567fdb7257011d32f9d82c4a29923428d62a78beb989188d1db2a5"} Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.348495 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b446b4fcc-sxzl8" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon-log" containerID="cri-o://0891e31806567fdb7257011d32f9d82c4a29923428d62a78beb989188d1db2a5" gracePeriod=30 Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.348521 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b446b4fcc-sxzl8" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" 
containerName="horizon" containerID="cri-o://bf48c949dabff07439306379dc67c2e9021f11ce35f6a0ac416d931ee3239a2e" gracePeriod=30 Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.364858 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-dbd455b84-x6fxk" podStartSLOduration=59.36482985 podStartE2EDuration="59.36482985s" podCreationTimestamp="2026-02-18 16:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:12.359697044 +0000 UTC m=+1412.625307963" watchObservedRunningTime="2026-02-18 16:53:12.36482985 +0000 UTC m=+1412.630440759" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.382024 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=25.801407489 podStartE2EDuration="1m10.38200502s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="2026-02-18 16:52:03.8005845 +0000 UTC m=+1344.066195409" lastFinishedPulling="2026-02-18 16:52:48.381182021 +0000 UTC m=+1388.646792940" observedRunningTime="2026-02-18 16:53:12.379792246 +0000 UTC m=+1412.645403155" watchObservedRunningTime="2026-02-18 16:53:12.38200502 +0000 UTC m=+1412.647615929" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.448395 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-544c585488-4dbfm" podStartSLOduration=60.448375184 podStartE2EDuration="1m0.448375184s" podCreationTimestamp="2026-02-18 16:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:12.419827756 +0000 UTC m=+1412.685438665" watchObservedRunningTime="2026-02-18 16:53:12.448375184 +0000 UTC m=+1412.713986093" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.451216 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-s97ft" podStartSLOduration=41.451208314 podStartE2EDuration="41.451208314s" podCreationTimestamp="2026-02-18 16:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:12.436515934 +0000 UTC m=+1412.702126843" watchObservedRunningTime="2026-02-18 16:53:12.451208314 +0000 UTC m=+1412.716819223" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.489801 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=24.024016942 podStartE2EDuration="1m10.489772457s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="2026-02-18 16:52:04.120154272 +0000 UTC m=+1344.385765181" lastFinishedPulling="2026-02-18 16:52:50.585909747 +0000 UTC m=+1390.851520696" observedRunningTime="2026-02-18 16:53:12.482040588 +0000 UTC m=+1412.747651517" watchObservedRunningTime="2026-02-18 16:53:12.489772457 +0000 UTC m=+1412.755383366" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.522304 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rxnsr" podStartSLOduration=44.522265153 podStartE2EDuration="44.522265153s" podCreationTimestamp="2026-02-18 16:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:12.505586284 +0000 UTC 
m=+1412.771197213" watchObservedRunningTime="2026-02-18 16:53:12.522265153 +0000 UTC m=+1412.787876062" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.542215 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b446b4fcc-sxzl8" podStartSLOduration=5.180775914 podStartE2EDuration="1m10.54219183s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="2026-02-18 16:52:04.83679965 +0000 UTC m=+1345.102410559" lastFinishedPulling="2026-02-18 16:53:10.198215566 +0000 UTC m=+1410.463826475" observedRunningTime="2026-02-18 16:53:12.532545144 +0000 UTC m=+1412.798156053" watchObservedRunningTime="2026-02-18 16:53:12.54219183 +0000 UTC m=+1412.807802739" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.825335 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:12 crc kubenswrapper[4812]: I0218 16:53:12.971805 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.236507 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.236554 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.268052 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.302194 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.310343 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.310396 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.361287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerStarted","Data":"71ad038a4f91f537422257d749a2e70ab9c5083af2f20659db7af3537850d266"} Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.361959 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.444352 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.514496 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.586741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.627990 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.730861 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:53:13 crc kubenswrapper[4812]: I0218 16:53:13.730955 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:53:14 crc kubenswrapper[4812]: I0218 16:53:14.454650 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=23.454629374 podStartE2EDuration="23.454629374s" podCreationTimestamp="2026-02-18 16:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:14.451438066 +0000 UTC m=+1414.717048995" watchObservedRunningTime="2026-02-18 16:53:14.454629374 +0000 UTC m=+1414.720240283" Feb 18 16:53:15 crc kubenswrapper[4812]: I0218 16:53:15.434821 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerName="watcher-decision-engine" containerID="cri-o://9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" gracePeriod=30 Feb 18 16:53:15 crc kubenswrapper[4812]: I0218 16:53:15.437861 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" containerID="cri-o://069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" gracePeriod=30 Feb 18 16:53:15 crc kubenswrapper[4812]: I0218 16:53:15.676996 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:53:16 crc kubenswrapper[4812]: I0218 16:53:16.582693 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:53:16 crc kubenswrapper[4812]: I0218 16:53:16.582771 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:53:17 crc kubenswrapper[4812]: I0218 16:53:17.458646 4812 generic.go:334] "Generic (PLEG): container finished" podID="7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" containerID="1c05da71c3a9ba4947cfb1f95772b8b2bd3749c3d56ce7904e354c6b21450196" exitCode=0 Feb 18 16:53:17 crc kubenswrapper[4812]: I0218 16:53:17.458844 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s97ft" event={"ID":"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e","Type":"ContainerDied","Data":"1c05da71c3a9ba4947cfb1f95772b8b2bd3749c3d56ce7904e354c6b21450196"} Feb 18 16:53:18 crc kubenswrapper[4812]: E0218 16:53:18.243037 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:18 crc kubenswrapper[4812]: E0218 16:53:18.244917 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:18 crc kubenswrapper[4812]: E0218 16:53:18.246769 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:18 crc kubenswrapper[4812]: E0218 16:53:18.246814 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:19 crc kubenswrapper[4812]: I0218 16:53:19.946353 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.507168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s97ft" event={"ID":"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e","Type":"ContainerDied","Data":"fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d"} Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.507713 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcce7708ffd395842759e212ca9cfb6551c3b1f57890a189fb9943f8fd60f37d" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.578264 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s97ft" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.583368 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.598946 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.690842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcc84\" (UniqueName: \"kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84\") pod \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.691043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts\") pod \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\" (UID: \"7842051b-f5fe-4dd4-8f0d-3c5850dbf55e\") " Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.693650 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" (UID: "7842051b-f5fe-4dd4-8f0d-3c5850dbf55e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.707381 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84" (OuterVolumeSpecName: "kube-api-access-kcc84") pod "7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" (UID: "7842051b-f5fe-4dd4-8f0d-3c5850dbf55e"). InnerVolumeSpecName "kube-api-access-kcc84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.793411 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:21 crc kubenswrapper[4812]: I0218 16:53:21.793653 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcc84\" (UniqueName: \"kubernetes.io/projected/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e-kube-api-access-kcc84\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:22 crc kubenswrapper[4812]: I0218 16:53:22.513954 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s97ft" Feb 18 16:53:22 crc kubenswrapper[4812]: I0218 16:53:22.521921 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 16:53:23 crc kubenswrapper[4812]: E0218 16:53:23.242537 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:23 crc kubenswrapper[4812]: E0218 16:53:23.244478 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:23 crc kubenswrapper[4812]: E0218 16:53:23.245369 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:23 crc kubenswrapper[4812]: E0218 16:53:23.245490 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:23 crc kubenswrapper[4812]: I0218 16:53:23.311974 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 16:53:23 crc kubenswrapper[4812]: I0218 16:53:23.732264 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-dbd455b84-x6fxk" podUID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 16:53:25 crc kubenswrapper[4812]: I0218 16:53:25.601843 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:25 crc kubenswrapper[4812]: I0218 16:53:25.602832 4812 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" containerID="cri-o://e88efaab20a3aa4c8f9bfe1dac9d4893fa4fccdd15eefe6832f81f2522a9c59b" gracePeriod=30 Feb 18 16:53:25 crc kubenswrapper[4812]: I0218 16:53:25.602871 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" containerID="cri-o://71ad038a4f91f537422257d749a2e70ab9c5083af2f20659db7af3537850d266" gracePeriod=30 Feb 18 16:53:26 crc kubenswrapper[4812]: I0218 16:53:26.548468 4812 generic.go:334] "Generic (PLEG): container finished" podID="6e824742-a061-4178-8a1b-a016205b88d8" containerID="e88efaab20a3aa4c8f9bfe1dac9d4893fa4fccdd15eefe6832f81f2522a9c59b" exitCode=143 Feb 18 16:53:26 crc kubenswrapper[4812]: I0218 16:53:26.548532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerDied","Data":"e88efaab20a3aa4c8f9bfe1dac9d4893fa4fccdd15eefe6832f81f2522a9c59b"} Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.351573 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:53:27 crc kubenswrapper[4812]: E0218 16:53:27.352535 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" containerName="mariadb-account-create-update" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.352554 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" containerName="mariadb-account-create-update" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.352824 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" containerName="mariadb-account-create-update" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.354764 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.367692 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.405944 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.406269 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgq4\" (UniqueName: \"kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.406389 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.509649 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.510085 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgq4\" (UniqueName: \"kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.510341 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.510484 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.510754 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.559187 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wjgq4\" (UniqueName: \"kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4\") pod \"community-operators-768tv\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:27 crc kubenswrapper[4812]: I0218 16:53:27.676553 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:53:28 crc kubenswrapper[4812]: E0218 16:53:28.244647 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:28 crc kubenswrapper[4812]: E0218 16:53:28.246945 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:28 crc kubenswrapper[4812]: E0218 16:53:28.248751 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:28 crc kubenswrapper[4812]: E0218 16:53:28.248785 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:28 crc kubenswrapper[4812]: I0218 16:53:28.299249 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:53:28 crc kubenswrapper[4812]: I0218 16:53:28.750229 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:40932->10.217.0.167:9322: read: connection reset by peer" Feb 18 16:53:28 crc kubenswrapper[4812]: I0218 16:53:28.750741 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": read tcp 10.217.0.2:40930->10.217.0.167:9322: read: connection reset by peer" Feb 18 16:53:29 crc kubenswrapper[4812]: I0218 16:53:29.582971 4812 generic.go:334] "Generic (PLEG): container finished" podID="6e824742-a061-4178-8a1b-a016205b88d8" containerID="71ad038a4f91f537422257d749a2e70ab9c5083af2f20659db7af3537850d266" exitCode=0 Feb 18 16:53:29 crc kubenswrapper[4812]: I0218 16:53:29.583022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerDied","Data":"71ad038a4f91f537422257d749a2e70ab9c5083af2f20659db7af3537850d266"} Feb 18 16:53:31 crc kubenswrapper[4812]: 
I0218 16:53:31.583403 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 16:53:31 crc kubenswrapper[4812]: I0218 16:53:31.583495 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 16:53:32 crc kubenswrapper[4812]: E0218 16:53:32.827737 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 16:53:32 crc kubenswrapper[4812]: E0218 16:53:32.829772 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 16:53:32 crc kubenswrapper[4812]: E0218 16:53:32.831363 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 16:53:32 crc kubenswrapper[4812]: E0218 16:53:32.831436 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerName="watcher-decision-engine" Feb 18 16:53:33 crc kubenswrapper[4812]: E0218 16:53:33.238692 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:33 crc kubenswrapper[4812]: E0218 16:53:33.240739 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:33 crc kubenswrapper[4812]: E0218 16:53:33.242397 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:33 crc kubenswrapper[4812]: E0218 16:53:33.242477 4812 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:33 crc kubenswrapper[4812]: I0218 16:53:33.310943 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 16:53:33 crc kubenswrapper[4812]: I0218 16:53:33.730670 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-dbd455b84-x6fxk" podUID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.649168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerStarted","Data":"bc7b05d636778dbe485e7cb83be92c7036e298eea0513ba43bd9e86fda11efab"} Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.649773 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerStarted","Data":"659c0b786ce9a59ecf1c75e6cb1da982a2c1770d0d0c10964a2ffcf539a8c0fd"} Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.653364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnx6" event={"ID":"09eb0e05-320a-463b-85cd-e1e387bb2610","Type":"ContainerStarted","Data":"974ec9d8d111fdb8c79fa08ce8f123e8211389121a03705501c003f02cab124e"} Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.704137 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7tnx6" podStartSLOduration=3.357682212 podStartE2EDuration="1m32.704094094s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="2026-02-18 16:52:04.884455531 +0000 UTC m=+1345.150066430" lastFinishedPulling="2026-02-18 16:53:34.230867383 +0000 UTC m=+1434.496478312" observedRunningTime="2026-02-18 16:53:34.699676806 +0000 UTC m=+1434.965287715" watchObservedRunningTime="2026-02-18 16:53:34.704094094 +0000 UTC m=+1434.969705003" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.714325 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.766544 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca\") pod \"6e824742-a061-4178-8a1b-a016205b88d8\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.766601 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle\") pod \"6e824742-a061-4178-8a1b-a016205b88d8\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.766692 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs\") pod \"6e824742-a061-4178-8a1b-a016205b88d8\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.766795 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data\") pod \"6e824742-a061-4178-8a1b-a016205b88d8\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.766903 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvhrc\" (UniqueName: \"kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc\") pod \"6e824742-a061-4178-8a1b-a016205b88d8\" (UID: \"6e824742-a061-4178-8a1b-a016205b88d8\") " Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.767908 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs" (OuterVolumeSpecName: "logs") pod "6e824742-a061-4178-8a1b-a016205b88d8" (UID: "6e824742-a061-4178-8a1b-a016205b88d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.829845 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6e824742-a061-4178-8a1b-a016205b88d8" (UID: "6e824742-a061-4178-8a1b-a016205b88d8"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.832321 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc" (OuterVolumeSpecName: "kube-api-access-vvhrc") pod "6e824742-a061-4178-8a1b-a016205b88d8" (UID: "6e824742-a061-4178-8a1b-a016205b88d8"). InnerVolumeSpecName "kube-api-access-vvhrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.870025 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvhrc\" (UniqueName: \"kubernetes.io/projected/6e824742-a061-4178-8a1b-a016205b88d8-kube-api-access-vvhrc\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.870055 4812 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.870066 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e824742-a061-4178-8a1b-a016205b88d8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.884241 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e824742-a061-4178-8a1b-a016205b88d8" (UID: "6e824742-a061-4178-8a1b-a016205b88d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.904767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data" (OuterVolumeSpecName: "config-data") pod "6e824742-a061-4178-8a1b-a016205b88d8" (UID: "6e824742-a061-4178-8a1b-a016205b88d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.972484 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:34 crc kubenswrapper[4812]: I0218 16:53:34.972558 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e824742-a061-4178-8a1b-a016205b88d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.669590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bdd340b-a57b-435b-b34b-a47c31b54c79","Type":"ContainerStarted","Data":"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545"} Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.674987 4812 generic.go:334] "Generic (PLEG): container finished" podID="61496010-8bfd-4169-b604-2d595bfc2bf1" containerID="cd4b260a85c7f7fe7ef10a6b462201c8378f854c1ea2d2a540cc0911ec87a9a3" exitCode=0 Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.675063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rxnsr" event={"ID":"61496010-8bfd-4169-b604-2d595bfc2bf1","Type":"ContainerDied","Data":"cd4b260a85c7f7fe7ef10a6b462201c8378f854c1ea2d2a540cc0911ec87a9a3"} Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.684612 4812 generic.go:334] "Generic (PLEG): container finished" podID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerID="bc7b05d636778dbe485e7cb83be92c7036e298eea0513ba43bd9e86fda11efab" exitCode=0 Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.684693 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerDied","Data":"bc7b05d636778dbe485e7cb83be92c7036e298eea0513ba43bd9e86fda11efab"} Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.688174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6e824742-a061-4178-8a1b-a016205b88d8","Type":"ContainerDied","Data":"fc14977535bb5014d59653bc04ce7cc46f15633a98631f93b9e8e4456ec194e7"} Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.688242 4812 scope.go:117] "RemoveContainer" containerID="71ad038a4f91f537422257d749a2e70ab9c5083af2f20659db7af3537850d266" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.688405 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.720723 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tzm7v" event={"ID":"c32f52a7-3dab-42c3-b32d-ae230861ae69","Type":"ContainerStarted","Data":"c7509139e6fa8fa08532972bc94488999ef707df85725a7d81784410cded7af9"} Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.764385 4812 scope.go:117] "RemoveContainer" containerID="e88efaab20a3aa4c8f9bfe1dac9d4893fa4fccdd15eefe6832f81f2522a9c59b" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.770426 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.795721 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.809357 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tzm7v" podStartSLOduration=2.395000112 podStartE2EDuration="1m32.809336823s" podCreationTimestamp="2026-02-18 16:52:03 +0000 UTC" firstStartedPulling="2026-02-18 16:52:04.866729094 +0000 UTC m=+1345.132340003" lastFinishedPulling="2026-02-18 16:53:35.281065805 +0000 UTC m=+1435.546676714" observedRunningTime="2026-02-18 16:53:35.78059274 +0000 UTC m=+1436.046203649" watchObservedRunningTime="2026-02-18 16:53:35.809336823 +0000 UTC m=+1436.074947732" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.841170 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:35 crc kubenswrapper[4812]: E0218 16:53:35.841811 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.841827 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" Feb 18 16:53:35 crc kubenswrapper[4812]: E0218 16:53:35.841844 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.841850 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.842037 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api-log" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.842054 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6e824742-a061-4178-8a1b-a016205b88d8" containerName="watcher-api" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.843018 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.847072 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.852573 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.853756 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.857009 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906555 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-config-data\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906804 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-logs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906837 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6jt\" (UniqueName: \"kubernetes.io/projected/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-kube-api-access-qw6jt\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:35 crc kubenswrapper[4812]: I0218 16:53:35.906927 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 
18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008231 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008318 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-config-data\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008433 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-logs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.008482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw6jt\" (UniqueName: \"kubernetes.io/projected/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-kube-api-access-qw6jt\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.009627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-logs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.014766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.014779 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" 
Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.018535 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.022510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.023564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-config-data\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.034664 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw6jt\" (UniqueName: \"kubernetes.io/projected/9512a70d-2793-4aac-bccc-4ed1d50aeb5b-kube-api-access-qw6jt\") pod \"watcher-api-0\" (UID: \"9512a70d-2793-4aac-bccc-4ed1d50aeb5b\") " pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.170891 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.540838 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e824742-a061-4178-8a1b-a016205b88d8" path="/var/lib/kubelet/pods/6e824742-a061-4178-8a1b-a016205b88d8/volumes" Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.661277 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 16:53:36 crc kubenswrapper[4812]: W0218 16:53:36.665354 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9512a70d_2793_4aac_bccc_4ed1d50aeb5b.slice/crio-576aa8534cb38fe0720fefd012047609af47328671bd711d311017cf6b935945 WatchSource:0}: Error finding container 576aa8534cb38fe0720fefd012047609af47328671bd711d311017cf6b935945: Status 404 returned error can't find the container with id 576aa8534cb38fe0720fefd012047609af47328671bd711d311017cf6b935945 Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.748213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-284h9" event={"ID":"4b87b144-e1c5-4d51-b6f1-6896913188d1","Type":"ContainerStarted","Data":"f16c6f19ff0a675905f40cc8ed610c1f2889f24418838793f7e2cb03a98ddf0f"} Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.756591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9512a70d-2793-4aac-bccc-4ed1d50aeb5b","Type":"ContainerStarted","Data":"576aa8534cb38fe0720fefd012047609af47328671bd711d311017cf6b935945"} Feb 18 16:53:36 crc kubenswrapper[4812]: I0218 16:53:36.793261 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-284h9" podStartSLOduration=4.360362787 podStartE2EDuration="1m34.793222092s" podCreationTimestamp="2026-02-18 16:52:02 +0000 UTC" firstStartedPulling="2026-02-18 16:52:04.852577867 +0000 UTC m=+1345.118188776" lastFinishedPulling="2026-02-18 
16:53:35.285437172 +0000 UTC m=+1435.551048081" observedRunningTime="2026-02-18 16:53:36.781429194 +0000 UTC m=+1437.047040103" watchObservedRunningTime="2026-02-18 16:53:36.793222092 +0000 UTC m=+1437.058833001" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.281749 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.346937 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.347117 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.347147 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.347180 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.347281 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.347346 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57kjj\" (UniqueName: \"kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj\") pod \"61496010-8bfd-4169-b604-2d595bfc2bf1\" (UID: \"61496010-8bfd-4169-b604-2d595bfc2bf1\") " Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.353623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts" (OuterVolumeSpecName: "scripts") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.357249 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.357289 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.376126 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj" (OuterVolumeSpecName: "kube-api-access-57kjj") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "kube-api-access-57kjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.381457 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data" (OuterVolumeSpecName: "config-data") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.382245 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61496010-8bfd-4169-b604-2d595bfc2bf1" (UID: "61496010-8bfd-4169-b604-2d595bfc2bf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449283 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57kjj\" (UniqueName: \"kubernetes.io/projected/61496010-8bfd-4169-b604-2d595bfc2bf1-kube-api-access-57kjj\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449337 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449350 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449361 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449372 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.449382 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61496010-8bfd-4169-b604-2d595bfc2bf1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.772342 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rxnsr" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.772336 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rxnsr" event={"ID":"61496010-8bfd-4169-b604-2d595bfc2bf1","Type":"ContainerDied","Data":"ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30"} Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.772741 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffafd561780bda597ed9b9f23c05793c1bd18dcf1cb7a4b92b9752fd8b79bc30" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.781374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9512a70d-2793-4aac-bccc-4ed1d50aeb5b","Type":"ContainerStarted","Data":"94040776e8835c2cc7dcd26b4bf9c66e62571b328f3747df82e554e0b99dd986"} Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.816793 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65547bbfff-9ppm5"] Feb 18 16:53:37 crc kubenswrapper[4812]: E0218 16:53:37.817348 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61496010-8bfd-4169-b604-2d595bfc2bf1" containerName="keystone-bootstrap" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.817376 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61496010-8bfd-4169-b604-2d595bfc2bf1" containerName="keystone-bootstrap" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.817663 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="61496010-8bfd-4169-b604-2d595bfc2bf1" containerName="keystone-bootstrap" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.818585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.823575 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.823871 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.824000 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.824194 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.824358 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.824673 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-828k4" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.842130 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65547bbfff-9ppm5"] Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-fernet-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868417 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-credential-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868500 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-config-data\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-scripts\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6n6\" (UniqueName: \"kubernetes.io/projected/fa418512-c79e-452a-9791-67dfe6c3d772-kube-api-access-ms6n6\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868616 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-combined-ca-bundle\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868700 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-internal-tls-certs\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.868720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-public-tls-certs\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.971681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-config-data\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.971762 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-scripts\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.971804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ms6n6\" (UniqueName: \"kubernetes.io/projected/fa418512-c79e-452a-9791-67dfe6c3d772-kube-api-access-ms6n6\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.971877 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-combined-ca-bundle\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.971922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-internal-tls-certs\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.972683 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-public-tls-certs\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.972781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-fernet-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.972901 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-credential-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.978012 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-credential-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.982077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-fernet-keys\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.986284 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-config-data\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.986434 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-internal-tls-certs\") pod 
\"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.987347 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-public-tls-certs\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.988069 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-combined-ca-bundle\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:37 crc kubenswrapper[4812]: I0218 16:53:37.995820 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6n6\" (UniqueName: \"kubernetes.io/projected/fa418512-c79e-452a-9791-67dfe6c3d772-kube-api-access-ms6n6\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:38 crc kubenswrapper[4812]: I0218 16:53:38.004734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa418512-c79e-452a-9791-67dfe6c3d772-scripts\") pod \"keystone-65547bbfff-9ppm5\" (UID: \"fa418512-c79e-452a-9791-67dfe6c3d772\") " pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:38 crc kubenswrapper[4812]: I0218 16:53:38.137620 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:38 crc kubenswrapper[4812]: E0218 16:53:38.240422 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:38 crc kubenswrapper[4812]: E0218 16:53:38.242720 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:38 crc kubenswrapper[4812]: E0218 16:53:38.244440 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:38 crc kubenswrapper[4812]: E0218 16:53:38.244490 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.472619 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65547bbfff-9ppm5"] Feb 18 16:53:39 crc 
kubenswrapper[4812]: I0218 16:53:39.816343 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65547bbfff-9ppm5" event={"ID":"fa418512-c79e-452a-9791-67dfe6c3d772","Type":"ContainerStarted","Data":"aa069cc469f9e2243e3e6cf8e6546164cd3febc6639a6cce62f697384abb81aa"} Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.830543 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerStarted","Data":"075caec4b419f1248e83076127514ef25ba794f4106521e911c6b23a1c35b3ad"} Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.854347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9512a70d-2793-4aac-bccc-4ed1d50aeb5b","Type":"ContainerStarted","Data":"4cbb9804cf99f7c16fa33f6030213b25af8acd19214e59e41931820c8f6f4eb2"} Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.854705 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.857187 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9512a70d-2793-4aac-bccc-4ed1d50aeb5b" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.169:9322/\": dial tcp 10.217.0.169:9322: connect: connection refused" Feb 18 16:53:39 crc kubenswrapper[4812]: I0218 16:53:39.917306 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.917284418 podStartE2EDuration="4.917284418s" podCreationTimestamp="2026-02-18 16:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:39.908739999 +0000 UTC m=+1440.174350908" watchObservedRunningTime="2026-02-18 16:53:39.917284418 +0000 UTC m=+1440.182895337" Feb 18 16:53:40 crc kubenswrapper[4812]: I0218 16:53:40.873130 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65547bbfff-9ppm5" event={"ID":"fa418512-c79e-452a-9791-67dfe6c3d772","Type":"ContainerStarted","Data":"c473a12182cb5293d9b4932c40eb13d0a2075ad307b81e9d8a4201553345afcc"} Feb 18 16:53:40 crc kubenswrapper[4812]: I0218 16:53:40.875630 4812 generic.go:334] "Generic (PLEG): container finished" podID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerID="075caec4b419f1248e83076127514ef25ba794f4106521e911c6b23a1c35b3ad" exitCode=0 Feb 18 16:53:40 crc kubenswrapper[4812]: I0218 16:53:40.876296 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerDied","Data":"075caec4b419f1248e83076127514ef25ba794f4106521e911c6b23a1c35b3ad"} Feb 18 16:53:41 crc kubenswrapper[4812]: I0218 16:53:41.171678 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 16:53:41 crc kubenswrapper[4812]: I0218 16:53:41.886619 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:53:41 crc kubenswrapper[4812]: I0218 16:53:41.906853 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65547bbfff-9ppm5" podStartSLOduration=4.906826778 podStartE2EDuration="4.906826778s" podCreationTimestamp="2026-02-18 16:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:53:41.904621534 +0000 UTC m=+1442.170232453" watchObservedRunningTime="2026-02-18 16:53:41.906826778 +0000 UTC m=+1442.172437687" Feb 18 16:53:42 crc kubenswrapper[4812]: I0218 16:53:42.896483 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8882306-a365-4ee4-adf2-e672b20ad942" containerID="0891e31806567fdb7257011d32f9d82c4a29923428d62a78beb989188d1db2a5" exitCode=137 Feb 18 16:53:42 crc kubenswrapper[4812]: I0218 16:53:42.896788 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerDied","Data":"0891e31806567fdb7257011d32f9d82c4a29923428d62a78beb989188d1db2a5"} Feb 18 16:53:42 crc kubenswrapper[4812]: I0218 16:53:42.898969 4812 generic.go:334] "Generic (PLEG): container finished" podID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerID="3e47b6b85536793cb8c2784ed56ecb52cb5f702a16cfb2021ff30d6d4b28063e" exitCode=137 Feb 18 16:53:42 crc kubenswrapper[4812]: I0218 16:53:42.899077 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerDied","Data":"3e47b6b85536793cb8c2784ed56ecb52cb5f702a16cfb2021ff30d6d4b28063e"} Feb 18 16:53:43 crc kubenswrapper[4812]: E0218 16:53:43.240648 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:43 crc kubenswrapper[4812]: E0218 16:53:43.241803 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:43 crc kubenswrapper[4812]: E0218 16:53:43.246389 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 16:53:43 crc kubenswrapper[4812]: E0218 16:53:43.246452 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.311571 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.311664 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.312492 4812 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e"} pod="openstack/horizon-544c585488-4dbfm" containerMessage="Container horizon failed startup probe, will be restarted" Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.312537 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" containerID="cri-o://d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" gracePeriod=30 Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.910302 4812 generic.go:334] "Generic (PLEG): container finished" podID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerID="8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0" exitCode=137 Feb 18 16:53:43 crc kubenswrapper[4812]: I0218 16:53:43.910375 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerDied","Data":"8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0"} Feb 18 16:53:43 crc kubenswrapper[4812]: E0218 16:53:43.973616 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bd26f3_3cea_437b_b253_3c8846e500c8.slice/crio-conmon-8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:53:44 crc kubenswrapper[4812]: I0218 16:53:44.921254 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8882306-a365-4ee4-adf2-e672b20ad942" containerID="bf48c949dabff07439306379dc67c2e9021f11ce35f6a0ac416d931ee3239a2e" exitCode=137 Feb 18 16:53:44 crc kubenswrapper[4812]: I0218 16:53:44.921299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerDied","Data":"bf48c949dabff07439306379dc67c2e9021f11ce35f6a0ac416d931ee3239a2e"} Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.717903 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.813792 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.836765 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjd8\" (UniqueName: \"kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8\") pod \"28bd26f3-3cea-437b-b253-3c8846e500c8\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.837003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data\") pod \"28bd26f3-3cea-437b-b253-3c8846e500c8\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.837030 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key\") pod \"28bd26f3-3cea-437b-b253-3c8846e500c8\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.837061 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs\") pod \"28bd26f3-3cea-437b-b253-3c8846e500c8\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.837143 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts\") pod \"28bd26f3-3cea-437b-b253-3c8846e500c8\" (UID: \"28bd26f3-3cea-437b-b253-3c8846e500c8\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.839754 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs" (OuterVolumeSpecName: "logs") pod "28bd26f3-3cea-437b-b253-3c8846e500c8" (UID: "28bd26f3-3cea-437b-b253-3c8846e500c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.872638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8" (OuterVolumeSpecName: "kube-api-access-7sjd8") pod "28bd26f3-3cea-437b-b253-3c8846e500c8" (UID: "28bd26f3-3cea-437b-b253-3c8846e500c8"). InnerVolumeSpecName "kube-api-access-7sjd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.873015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "28bd26f3-3cea-437b-b253-3c8846e500c8" (UID: "28bd26f3-3cea-437b-b253-3c8846e500c8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.874205 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data" (OuterVolumeSpecName: "config-data") pod "28bd26f3-3cea-437b-b253-3c8846e500c8" (UID: "28bd26f3-3cea-437b-b253-3c8846e500c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.877454 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts" (OuterVolumeSpecName: "scripts") pod "28bd26f3-3cea-437b-b253-3c8846e500c8" (UID: "28bd26f3-3cea-437b-b253-3c8846e500c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.890354 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9512a70d-2793-4aac-bccc-4ed1d50aeb5b" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.169:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.932377 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b446b4fcc-sxzl8" event={"ID":"a8882306-a365-4ee4-adf2-e672b20ad942","Type":"ContainerDied","Data":"554e72ca644cb221445c9577a7bb13b8192bec4195b95acb4b1d23336baf1ac4"} Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.932388 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b446b4fcc-sxzl8" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.932440 4812 scope.go:117] "RemoveContainer" containerID="bf48c949dabff07439306379dc67c2e9021f11ce35f6a0ac416d931ee3239a2e" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.934541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ff894c77-sfq6d" event={"ID":"28bd26f3-3cea-437b-b253-3c8846e500c8","Type":"ContainerDied","Data":"45283a95d8f0b464f84530475c76f2c34ef1712da71083d1dffab97ec841b91e"} Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.934614 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55ff894c77-sfq6d" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.938990 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data\") pod \"a8882306-a365-4ee4-adf2-e672b20ad942\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939015 4812 generic.go:334] "Generic (PLEG): container finished" podID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerID="9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" exitCode=137 Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939045 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97cr\" (UniqueName: \"kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr\") pod \"a8882306-a365-4ee4-adf2-e672b20ad942\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939081 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a2c86f83-422d-46f2-942d-608f3afacaa0","Type":"ContainerDied","Data":"9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4"} Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939133 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs\") pod \"a8882306-a365-4ee4-adf2-e672b20ad942\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939195 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts\") pod \"a8882306-a365-4ee4-adf2-e672b20ad942\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939317 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key\") pod \"a8882306-a365-4ee4-adf2-e672b20ad942\" (UID: \"a8882306-a365-4ee4-adf2-e672b20ad942\") " Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939760 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939780 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sjd8\" (UniqueName: \"kubernetes.io/projected/28bd26f3-3cea-437b-b253-3c8846e500c8-kube-api-access-7sjd8\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939794 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28bd26f3-3cea-437b-b253-3c8846e500c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939850 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28bd26f3-3cea-437b-b253-3c8846e500c8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.939860 4812 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28bd26f3-3cea-437b-b253-3c8846e500c8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.940765 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs" (OuterVolumeSpecName: "logs") pod "a8882306-a365-4ee4-adf2-e672b20ad942" (UID: "a8882306-a365-4ee4-adf2-e672b20ad942"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.943242 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr" (OuterVolumeSpecName: "kube-api-access-f97cr") pod "a8882306-a365-4ee4-adf2-e672b20ad942" (UID: "a8882306-a365-4ee4-adf2-e672b20ad942"). InnerVolumeSpecName "kube-api-access-f97cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.944992 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a8882306-a365-4ee4-adf2-e672b20ad942" (UID: "a8882306-a365-4ee4-adf2-e672b20ad942"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.945226 4812 generic.go:334] "Generic (PLEG): container finished" podID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" exitCode=137 Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.945264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"19d13bf1-b3dc-405a-9240-6133d293f08a","Type":"ContainerDied","Data":"069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9"} Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.973833 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data" (OuterVolumeSpecName: "config-data") pod "a8882306-a365-4ee4-adf2-e672b20ad942" (UID: "a8882306-a365-4ee4-adf2-e672b20ad942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.977088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts" (OuterVolumeSpecName: "scripts") pod "a8882306-a365-4ee4-adf2-e672b20ad942" (UID: "a8882306-a365-4ee4-adf2-e672b20ad942"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.979852 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:53:45 crc kubenswrapper[4812]: I0218 16:53:45.988656 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55ff894c77-sfq6d"] Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.042238 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.042293 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97cr\" (UniqueName: \"kubernetes.io/projected/a8882306-a365-4ee4-adf2-e672b20ad942-kube-api-access-f97cr\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.042311 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8882306-a365-4ee4-adf2-e672b20ad942-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.042322 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8882306-a365-4ee4-adf2-e672b20ad942-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.042336 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8882306-a365-4ee4-adf2-e672b20ad942-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.171141 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.177303 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="9512a70d-2793-4aac-bccc-4ed1d50aeb5b" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.169:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.274087 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.288515 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b446b4fcc-sxzl8"] Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.332397 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.333763 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.381291 4812 scope.go:117] "RemoveContainer" containerID="0891e31806567fdb7257011d32f9d82c4a29923428d62a78beb989188d1db2a5" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.397355 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.410019 4812 scope.go:117] "RemoveContainer" containerID="8c9d9f71d32622c1aebbdf70649f72f3f028ed959f1b5cf460de656f9fdc7bc0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.520277 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" path="/var/lib/kubelet/pods/28bd26f3-3cea-437b-b253-3c8846e500c8/volumes" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.521439 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" path="/var/lib/kubelet/pods/a8882306-a365-4ee4-adf2-e672b20ad942/volumes" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.551743 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwfhm\" (UniqueName: \"kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm\") pod \"a2c86f83-422d-46f2-942d-608f3afacaa0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.552119 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data\") pod \"a2c86f83-422d-46f2-942d-608f3afacaa0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.552219 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca\") pod \"a2c86f83-422d-46f2-942d-608f3afacaa0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.552342 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs\") pod \"a2c86f83-422d-46f2-942d-608f3afacaa0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.552382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle\") pod \"a2c86f83-422d-46f2-942d-608f3afacaa0\" (UID: \"a2c86f83-422d-46f2-942d-608f3afacaa0\") " Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.553012 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs" (OuterVolumeSpecName: "logs") pod "a2c86f83-422d-46f2-942d-608f3afacaa0" (UID: "a2c86f83-422d-46f2-942d-608f3afacaa0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.557629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm" (OuterVolumeSpecName: "kube-api-access-rwfhm") pod "a2c86f83-422d-46f2-942d-608f3afacaa0" (UID: "a2c86f83-422d-46f2-942d-608f3afacaa0"). InnerVolumeSpecName "kube-api-access-rwfhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.584494 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2c86f83-422d-46f2-942d-608f3afacaa0" (UID: "a2c86f83-422d-46f2-942d-608f3afacaa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.587355 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a2c86f83-422d-46f2-942d-608f3afacaa0" (UID: "a2c86f83-422d-46f2-942d-608f3afacaa0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.654914 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c86f83-422d-46f2-942d-608f3afacaa0-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.654954 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.654969 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwfhm\" (UniqueName: \"kubernetes.io/projected/a2c86f83-422d-46f2-942d-608f3afacaa0-kube-api-access-rwfhm\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.654982 4812 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.689431 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data" (OuterVolumeSpecName: "config-data") pod "a2c86f83-422d-46f2-942d-608f3afacaa0" (UID: "a2c86f83-422d-46f2-942d-608f3afacaa0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.757543 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c86f83-422d-46f2-942d-608f3afacaa0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.829025 4812 scope.go:117] "RemoveContainer" containerID="3e47b6b85536793cb8c2784ed56ecb52cb5f702a16cfb2021ff30d6d4b28063e" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.959359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a2c86f83-422d-46f2-942d-608f3afacaa0","Type":"ContainerDied","Data":"7706016e7d230f4beac91727a801d9dfeee7ffbd3e73017ff7710b40d9741ec4"} Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.959526 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:46 crc kubenswrapper[4812]: I0218 16:53:46.974628 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.025772 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.037015 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.057419 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:53:47 crc kubenswrapper[4812]: E0218 16:53:47.058404 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerName="watcher-decision-engine" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058437 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerName="watcher-decision-engine" Feb 18 16:53:47 crc kubenswrapper[4812]: E0218 16:53:47.058463 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058472 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: E0218 16:53:47.058505 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058516 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: E0218 16:53:47.058535 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058544 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: E0218 16:53:47.058568 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058576 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058784 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058807 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon-log" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058827 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8882306-a365-4ee4-adf2-e672b20ad942" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058841 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bd26f3-3cea-437b-b253-3c8846e500c8" containerName="horizon" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.058856 4812 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" containerName="watcher-decision-engine" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.059791 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.064164 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.064780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.064837 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.064917 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.064972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75rj\" (UniqueName: \"kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.065060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.091518 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.165855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.166013 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.166045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.166095 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.166143 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75rj\" (UniqueName: \"kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.166575 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.170580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.171950 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.180137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.184579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75rj\" (UniqueName: \"kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj\") pod \"watcher-decision-engine-0\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:53:47 crc kubenswrapper[4812]: I0218 16:53:47.384863 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.129619 4812 scope.go:117] "RemoveContainer" containerID="9765b903ff71963ad7b1cf0f9be474cd631aa0fa237068decc63b405e5c20ec4" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.219604 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.305720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg7kr\" (UniqueName: \"kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr\") pod \"19d13bf1-b3dc-405a-9240-6133d293f08a\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.305843 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs\") pod \"19d13bf1-b3dc-405a-9240-6133d293f08a\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.305889 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data\") pod \"19d13bf1-b3dc-405a-9240-6133d293f08a\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.306325 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs" (OuterVolumeSpecName: "logs") pod "19d13bf1-b3dc-405a-9240-6133d293f08a" (UID: "19d13bf1-b3dc-405a-9240-6133d293f08a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.306683 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle\") pod \"19d13bf1-b3dc-405a-9240-6133d293f08a\" (UID: \"19d13bf1-b3dc-405a-9240-6133d293f08a\") " Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.307061 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d13bf1-b3dc-405a-9240-6133d293f08a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.313627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr" (OuterVolumeSpecName: "kube-api-access-dg7kr") pod "19d13bf1-b3dc-405a-9240-6133d293f08a" (UID: "19d13bf1-b3dc-405a-9240-6133d293f08a"). InnerVolumeSpecName "kube-api-access-dg7kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.343789 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19d13bf1-b3dc-405a-9240-6133d293f08a" (UID: "19d13bf1-b3dc-405a-9240-6133d293f08a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.359644 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data" (OuterVolumeSpecName: "config-data") pod "19d13bf1-b3dc-405a-9240-6133d293f08a" (UID: "19d13bf1-b3dc-405a-9240-6133d293f08a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.407790 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.407830 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d13bf1-b3dc-405a-9240-6133d293f08a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.407844 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg7kr\" (UniqueName: \"kubernetes.io/projected/19d13bf1-b3dc-405a-9240-6133d293f08a-kube-api-access-dg7kr\") on node \"crc\" DevicePath \"\"" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.521000 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c86f83-422d-46f2-942d-608f3afacaa0" path="/var/lib/kubelet/pods/a2c86f83-422d-46f2-942d-608f3afacaa0/volumes" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.737320 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-dbd455b84-x6fxk" podUID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.737415 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.738321 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"02d536988c0fbe4e2fcb4e41398938c94741262bff09c439b4886dde9223a6ae"} pod="openstack/horizon-dbd455b84-x6fxk" containerMessage="Container horizon failed startup probe, will be restarted" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.738365 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-dbd455b84-x6fxk" podUID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerName="horizon" containerID="cri-o://02d536988c0fbe4e2fcb4e41398938c94741262bff09c439b4886dde9223a6ae" gracePeriod=30 Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.984745 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:53:48 crc kubenswrapper[4812]: I0218 16:53:48.985267 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"19d13bf1-b3dc-405a-9240-6133d293f08a","Type":"ContainerDied","Data":"bee5058fc9f09a26018462a316e284c23beff8bb749a2c9c566522f3aab5ec55"} Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.010391 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.013427 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.037523 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:53:49 crc kubenswrapper[4812]: E0218 16:53:49.038050 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.038074 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.038357 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" containerName="watcher-applier" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.039236 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.048041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.054770 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.221392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-logs\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.221468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.222036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjjc\" (UniqueName: \"kubernetes.io/projected/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-kube-api-access-rgjjc\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.222120 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-config-data\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.323905 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-logs\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.324011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.324126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjjc\" (UniqueName: \"kubernetes.io/projected/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-kube-api-access-rgjjc\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.324168 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-config-data\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.324546 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-logs\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.328457 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.328691 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-config-data\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.387858 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjjc\" (UniqueName: \"kubernetes.io/projected/5f700e3b-d59f-4f8b-8ad0-845f2f5cb651-kube-api-access-rgjjc\") pod \"watcher-applier-0\" (UID: \"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651\") " pod="openstack/watcher-applier-0" Feb 18 16:53:49 crc kubenswrapper[4812]: I0218 16:53:49.667400 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 16:53:50 crc kubenswrapper[4812]: I0218 16:53:50.520682 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d13bf1-b3dc-405a-9240-6133d293f08a" path="/var/lib/kubelet/pods/19d13bf1-b3dc-405a-9240-6133d293f08a/volumes" Feb 18 16:53:53 crc kubenswrapper[4812]: I0218 16:53:53.251309 4812 patch_prober.go:28] interesting pod/router-default-5444994796-xs668 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 16:53:53 crc kubenswrapper[4812]: I0218 16:53:53.251671 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xs668" podUID="583bfa7b-2d34-43e6-9fd2-1e15d8e5f94b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:54:00 crc kubenswrapper[4812]: I0218 16:54:00.622501 4812 scope.go:117] "RemoveContainer" containerID="069c63e318a3f7fd117f4055f8d2436324a75ed201a774ee9a8866d74ac59df9" Feb 18 16:54:00 crc kubenswrapper[4812]: E0218 16:54:00.890215 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 18 16:54:00 crc kubenswrapper[4812]: E0218 16:54:00.890701 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bdd340b-a57b-435b-b34b-a47c31b54c79): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 16:54:00 crc kubenswrapper[4812]: E0218 16:54:00.892155 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.098669 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerStarted","Data":"60d649a8ed11dd003ce96e889e4961cf46537f9e4265471ae303f68294c3e34c"} Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.098740 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" containerName="sg-core" containerID="cri-o://7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545" gracePeriod=30 Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.130084 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-768tv" podStartSLOduration=10.187181494 podStartE2EDuration="34.130060672s" podCreationTimestamp="2026-02-18 16:53:27 +0000 UTC" firstStartedPulling="2026-02-18 16:53:35.687150643 +0000 UTC m=+1435.952761552" lastFinishedPulling="2026-02-18 16:53:59.630029811 +0000 UTC m=+1459.895640730" observedRunningTime="2026-02-18 16:54:01.12056211 +0000 UTC m=+1461.386173019" watchObservedRunningTime="2026-02-18 16:54:01.130060672 +0000 UTC m=+1461.395671581" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.173601 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 16:54:01 crc kubenswrapper[4812]: W0218 16:54:01.178031 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f700e3b_d59f_4f8b_8ad0_845f2f5cb651.slice/crio-a0226f2d105a5dd0edf5e52bc4c155e0c7a1099360aeb454e93c0fb9819008ea WatchSource:0}: Error finding container 
a0226f2d105a5dd0edf5e52bc4c155e0c7a1099360aeb454e93c0fb9819008ea: Status 404 returned error can't find the container with id a0226f2d105a5dd0edf5e52bc4c155e0c7a1099360aeb454e93c0fb9819008ea Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.193663 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.522301 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.661144 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.661321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.661395 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29l7q\" (UniqueName: \"kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.661431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.661461 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.662055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.662175 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts\") pod \"4bdd340b-a57b-435b-b34b-a47c31b54c79\" (UID: \"4bdd340b-a57b-435b-b34b-a47c31b54c79\") " Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.662274 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.662339 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.663295 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.663318 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bdd340b-a57b-435b-b34b-a47c31b54c79-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.666020 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts" (OuterVolumeSpecName: "scripts") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.666539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.666792 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q" (OuterVolumeSpecName: "kube-api-access-29l7q") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "kube-api-access-29l7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.667290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data" (OuterVolumeSpecName: "config-data") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.701968 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bdd340b-a57b-435b-b34b-a47c31b54c79" (UID: "4bdd340b-a57b-435b-b34b-a47c31b54c79"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.765958 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.765991 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.766002 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29l7q\" (UniqueName: \"kubernetes.io/projected/4bdd340b-a57b-435b-b34b-a47c31b54c79-kube-api-access-29l7q\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.766013 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:01 crc kubenswrapper[4812]: I0218 16:54:01.766022 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bdd340b-a57b-435b-b34b-a47c31b54c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.108759 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"15072f74-894a-40ee-9609-d58e29a27de8","Type":"ContainerStarted","Data":"aed82da0ce27d1d06780ffdb8739312c7f876f9006f674bf15d457529bf8b160"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.108804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"15072f74-894a-40ee-9609-d58e29a27de8","Type":"ContainerStarted","Data":"d19c97982f4b747f17a8d451cdbcc5b98c5488d6ffc742dc9fc352f3bc7e513c"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.110425 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bdd340b-a57b-435b-b34b-a47c31b54c79" containerID="7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545" exitCode=2 Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.110659 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.111595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bdd340b-a57b-435b-b34b-a47c31b54c79","Type":"ContainerDied","Data":"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.111629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bdd340b-a57b-435b-b34b-a47c31b54c79","Type":"ContainerDied","Data":"4bb14630cd545dbe03086192c3a6cc2cac27f1df74118084d0e0213429efd09a"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.111648 4812 scope.go:117] "RemoveContainer" containerID="7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.116247 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651","Type":"ContainerStarted","Data":"4ff25f7c2bcd9ca0eb9b2a927de0fd0ebf328b51d8407a7afdda449870f7d5fe"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.116284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5f700e3b-d59f-4f8b-8ad0-845f2f5cb651","Type":"ContainerStarted","Data":"a0226f2d105a5dd0edf5e52bc4c155e0c7a1099360aeb454e93c0fb9819008ea"} Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.125451 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=15.125433441 podStartE2EDuration="15.125433441s" podCreationTimestamp="2026-02-18 16:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:54:02.125032442 +0000 UTC m=+1462.390643361" watchObservedRunningTime="2026-02-18 16:54:02.125433441 +0000 UTC m=+1462.391044340" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.140214 4812 scope.go:117] "RemoveContainer" containerID="7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545" Feb 18 16:54:02 crc kubenswrapper[4812]: E0218 16:54:02.140718 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545\": container with ID starting with 7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545 not found: ID does not exist" containerID="7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.140891 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545"} err="failed to get container status \"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545\": rpc error: code = NotFound desc = could not find container \"7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545\": container with ID starting with 7cddaf8949b600dd9229407aa2ba3add8f0b761445778138a499996e6a7c9545 not found: ID does not exist" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.176664 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=13.176636955 podStartE2EDuration="13.176636955s" podCreationTimestamp="2026-02-18 16:53:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:54:02.164049807 +0000 UTC m=+1462.429660716" watchObservedRunningTime="2026-02-18 16:54:02.176636955 +0000 UTC m=+1462.442247874" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.232241 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.243948 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.261621 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:02 crc kubenswrapper[4812]: E0218 16:54:02.262097 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" containerName="sg-core" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.262130 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" containerName="sg-core" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.262362 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" containerName="sg-core" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.266054 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.270939 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.271984 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.310574 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.380785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.386888 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlk7j\" (UniqueName: \"kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.387059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.387250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.387389 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.387523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.387552 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.395607 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:02 crc kubenswrapper[4812]: E0218 16:54:02.396606 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-jlk7j log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="49f95bde-3d45-4d96-9a6a-42efd75ea450" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlk7j\" (UniqueName: \"kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488763 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488812 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488915 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.488959 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.489600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.489628 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.499415 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.499565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.500420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.501639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.511421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlk7j\" (UniqueName: \"kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j\") pod \"ceilometer-0\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " pod="openstack/ceilometer-0" Feb 18 16:54:02 crc kubenswrapper[4812]: I0218 16:54:02.519972 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdd340b-a57b-435b-b34b-a47c31b54c79" path="/var/lib/kubelet/pods/4bdd340b-a57b-435b-b34b-a47c31b54c79/volumes" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.128088 4812 generic.go:334] "Generic (PLEG): container finished" podID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerID="02d536988c0fbe4e2fcb4e41398938c94741262bff09c439b4886dde9223a6ae" exitCode=0 Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.128244 4812 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.129079 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dbd455b84-x6fxk" event={"ID":"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2","Type":"ContainerDied","Data":"02d536988c0fbe4e2fcb4e41398938c94741262bff09c439b4886dde9223a6ae"} Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.129127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dbd455b84-x6fxk" event={"ID":"11a958b4-3c26-4d73-acfa-fb3fb4c08cb2","Type":"ContainerStarted","Data":"b734a799021b3cea442437888447606e1727f9f984dc5a8a125d41c7816dd7bc"} Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.139689 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.300846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301408 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlk7j\" (UniqueName: \"kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301492 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301606 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301674 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301725 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301843 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd\") pod \"49f95bde-3d45-4d96-9a6a-42efd75ea450\" (UID: \"49f95bde-3d45-4d96-9a6a-42efd75ea450\") " Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.301911 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.302152 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.303406 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.303600 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49f95bde-3d45-4d96-9a6a-42efd75ea450-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.305294 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data" (OuterVolumeSpecName: "config-data") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.305413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.305413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j" (OuterVolumeSpecName: "kube-api-access-jlk7j") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "kube-api-access-jlk7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.305833 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts" (OuterVolumeSpecName: "scripts") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.310407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49f95bde-3d45-4d96-9a6a-42efd75ea450" (UID: "49f95bde-3d45-4d96-9a6a-42efd75ea450"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.406075 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.406155 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlk7j\" (UniqueName: \"kubernetes.io/projected/49f95bde-3d45-4d96-9a6a-42efd75ea450-kube-api-access-jlk7j\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.406172 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.406184 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.406203 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49f95bde-3d45-4d96-9a6a-42efd75ea450-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.730723 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:54:03 crc kubenswrapper[4812]: I0218 16:54:03.730772 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.135305 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.216833 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.241464 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.261211 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.267484 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.269679 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.270527 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.282452 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.430871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.430919 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.430962 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqsc\" (UniqueName: \"kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.431022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.431177 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.431287 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.431311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.520046 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f95bde-3d45-4d96-9a6a-42efd75ea450" path="/var/lib/kubelet/pods/49f95bde-3d45-4d96-9a6a-42efd75ea450/volumes" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.532800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.532855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.532904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.532928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.532986 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.533017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.533062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqsc\" (UniqueName: \"kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.533364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.533423 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.538713 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.538910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.539131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.539673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.552218 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqsc\" (UniqueName: \"kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc\") pod \"ceilometer-0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.590085 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:54:04 crc kubenswrapper[4812]: I0218 16:54:04.668192 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 16:54:05 crc kubenswrapper[4812]: I0218 16:54:05.015390 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:54:05 crc kubenswrapper[4812]: W0218 16:54:05.020990 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d1389b5_c52f_440e_af95_996ecdc720f0.slice/crio-c4b6674beab123fbc2f3bc9812b31b99e4e246d49cae7bda88f3c2dc3241b5f8 WatchSource:0}: Error finding container c4b6674beab123fbc2f3bc9812b31b99e4e246d49cae7bda88f3c2dc3241b5f8: Status 404 returned error can't find the container with id c4b6674beab123fbc2f3bc9812b31b99e4e246d49cae7bda88f3c2dc3241b5f8 Feb 18 16:54:05 crc kubenswrapper[4812]: I0218 16:54:05.147138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerStarted","Data":"c4b6674beab123fbc2f3bc9812b31b99e4e246d49cae7bda88f3c2dc3241b5f8"} Feb 18 16:54:07 crc kubenswrapper[4812]: I0218 16:54:07.386987 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 16:54:07 crc kubenswrapper[4812]: I0218 16:54:07.433923 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 16:54:07 crc kubenswrapper[4812]: I0218 16:54:07.677668 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:07 crc kubenswrapper[4812]: I0218 16:54:07.678065 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:07 crc kubenswrapper[4812]: I0218 16:54:07.727650 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:08 crc kubenswrapper[4812]: I0218 16:54:08.176089 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerStarted","Data":"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c"} Feb 18 16:54:08 crc kubenswrapper[4812]: I0218 16:54:08.176552 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 16:54:08 crc kubenswrapper[4812]: I0218 16:54:08.213828 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 16:54:08 crc kubenswrapper[4812]: I0218 16:54:08.225274 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:09 crc kubenswrapper[4812]: I0218 16:54:09.667979 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 16:54:09 crc kubenswrapper[4812]: I0218 16:54:09.697163 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 16:54:10 crc kubenswrapper[4812]: I0218 16:54:10.214753 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 16:54:11 crc kubenswrapper[4812]: I0218 16:54:11.534080 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:54:11 crc kubenswrapper[4812]: I0218 16:54:11.534325 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-768tv" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="registry-server" containerID="cri-o://60d649a8ed11dd003ce96e889e4961cf46537f9e4265471ae303f68294c3e34c" gracePeriod=2 Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.228496 4812 generic.go:334] "Generic (PLEG): container finished" podID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerID="60d649a8ed11dd003ce96e889e4961cf46537f9e4265471ae303f68294c3e34c" exitCode=0 Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.228954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerDied","Data":"60d649a8ed11dd003ce96e889e4961cf46537f9e4265471ae303f68294c3e34c"} Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.382306 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.504775 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities\") pod \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.504907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content\") pod \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.504960 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjgq4\" (UniqueName: \"kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4\") pod \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\" (UID: \"d9c27d65-406b-4ab3-960a-0f02e6ae1746\") " Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.506525 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities" (OuterVolumeSpecName: "utilities") pod "d9c27d65-406b-4ab3-960a-0f02e6ae1746" (UID: "d9c27d65-406b-4ab3-960a-0f02e6ae1746"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.516799 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4" (OuterVolumeSpecName: "kube-api-access-wjgq4") pod "d9c27d65-406b-4ab3-960a-0f02e6ae1746" (UID: "d9c27d65-406b-4ab3-960a-0f02e6ae1746"). InnerVolumeSpecName "kube-api-access-wjgq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.557896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9c27d65-406b-4ab3-960a-0f02e6ae1746" (UID: "d9c27d65-406b-4ab3-960a-0f02e6ae1746"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.607429 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.607460 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjgq4\" (UniqueName: \"kubernetes.io/projected/d9c27d65-406b-4ab3-960a-0f02e6ae1746-kube-api-access-wjgq4\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.607471 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9c27d65-406b-4ab3-960a-0f02e6ae1746-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:13 crc kubenswrapper[4812]: I0218 16:54:13.732583 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-dbd455b84-x6fxk" podUID="11a958b4-3c26-4d73-acfa-fb3fb4c08cb2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.240845 4812 generic.go:334] "Generic (PLEG): container finished" podID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerID="d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" exitCode=137 Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.240910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerDied","Data":"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e"} Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.245450 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-768tv" event={"ID":"d9c27d65-406b-4ab3-960a-0f02e6ae1746","Type":"ContainerDied","Data":"659c0b786ce9a59ecf1c75e6cb1da982a2c1770d0d0c10964a2ffcf539a8c0fd"} Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.245489 4812 scope.go:117] "RemoveContainer" containerID="60d649a8ed11dd003ce96e889e4961cf46537f9e4265471ae303f68294c3e34c" Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.245563 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-768tv" Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.292909 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.301322 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-768tv"] Feb 18 16:54:14 crc kubenswrapper[4812]: I0218 16:54:14.552565 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" path="/var/lib/kubelet/pods/d9c27d65-406b-4ab3-960a-0f02e6ae1746/volumes" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.748019 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:54:17 crc kubenswrapper[4812]: E0218 16:54:17.749466 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="extract-utilities" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.749487 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="extract-utilities" Feb 18 16:54:17 crc kubenswrapper[4812]: E0218 16:54:17.749505 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="extract-content" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.749513 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="extract-content" Feb 18 16:54:17 crc kubenswrapper[4812]: E0218 16:54:17.749532 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="registry-server" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.749540 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="registry-server" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.749716 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c27d65-406b-4ab3-960a-0f02e6ae1746" containerName="registry-server" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.750980 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.765468 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.875263 4812 scope.go:117] "RemoveContainer" containerID="075caec4b419f1248e83076127514ef25ba794f4106521e911c6b23a1c35b3ad" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.907741 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.907928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqflh\" (UniqueName: \"kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.908032 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:17 crc kubenswrapper[4812]: I0218 16:54:17.908714 4812 scope.go:117] "RemoveContainer" containerID="bc7b05d636778dbe485e7cb83be92c7036e298eea0513ba43bd9e86fda11efab" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.009530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.009613 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqflh\" (UniqueName: \"kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.009651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.010284 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.010607 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.035017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqflh\" (UniqueName: \"kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh\") pod \"redhat-operators-jfm29\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.083624 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:18 crc kubenswrapper[4812]: I0218 16:54:18.631419 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:54:19 crc kubenswrapper[4812]: I0218 16:54:19.324706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerStarted","Data":"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468"} Feb 18 16:54:19 crc kubenswrapper[4812]: I0218 16:54:19.325170 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerStarted","Data":"9de23ebe7bc41f7da05069ede60f7711b6eb65038deffbcacea54ef69595dfd7"} Feb 18 16:54:19 crc kubenswrapper[4812]: I0218 16:54:19.327392 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerStarted","Data":"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a"} Feb 18 16:54:19 crc kubenswrapper[4812]: I0218 16:54:19.330703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerStarted","Data":"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2"} Feb 18 16:54:20 crc kubenswrapper[4812]: I0218 16:54:20.119013 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65547bbfff-9ppm5" Feb 18 16:54:20 crc kubenswrapper[4812]: I0218 16:54:20.345021 4812 generic.go:334] "Generic (PLEG): container finished" podID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerID="f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468" exitCode=0 Feb 18 16:54:20 crc kubenswrapper[4812]: I0218 16:54:20.345162 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerDied","Data":"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468"} Feb 18 16:54:21 crc kubenswrapper[4812]: I0218 16:54:21.359607 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerStarted","Data":"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069"} Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.310719 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.310801 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.689596 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.692934 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.698444 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.698552 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.698820 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-w4d4p" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.713930 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.857757 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config-secret\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.857995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9rz\" (UniqueName: \"kubernetes.io/projected/22112483-fba0-45a2-90d1-5f35b199a471-kube-api-access-hz9rz\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.858052 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.858125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.959582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config-secret\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.959667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9rz\" (UniqueName: \"kubernetes.io/projected/22112483-fba0-45a2-90d1-5f35b199a471-kube-api-access-hz9rz\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.959695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.959721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.960602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.966485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-openstack-config-secret\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.976059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22112483-fba0-45a2-90d1-5f35b199a471-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:23 crc kubenswrapper[4812]: I0218 16:54:23.986637 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9rz\" (UniqueName: \"kubernetes.io/projected/22112483-fba0-45a2-90d1-5f35b199a471-kube-api-access-hz9rz\") pod \"openstackclient\" (UID: \"22112483-fba0-45a2-90d1-5f35b199a471\") " pod="openstack/openstackclient" Feb 18 16:54:24 crc kubenswrapper[4812]: I0218 16:54:24.030081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 16:54:26 crc kubenswrapper[4812]: I0218 16:54:25.946533 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:54:27 crc kubenswrapper[4812]: I0218 16:54:27.767811 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-dbd455b84-x6fxk" Feb 18 16:54:27 crc kubenswrapper[4812]: I0218 16:54:27.839496 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:54:27 crc kubenswrapper[4812]: I0218 16:54:27.839818 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon-log" containerID="cri-o://b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" gracePeriod=30 Feb 18 16:54:27 crc kubenswrapper[4812]: I0218 16:54:27.839991 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-544c585488-4dbfm" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" containerID="cri-o://f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" gracePeriod=30 Feb 18 16:54:28 crc kubenswrapper[4812]: I0218 16:54:28.438485 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerStarted","Data":"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7"} Feb 18 16:54:28 crc kubenswrapper[4812]: I0218 16:54:28.440732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerStarted","Data":"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee"} Feb 18 16:54:28 crc kubenswrapper[4812]: I0218 16:54:28.657672 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 16:54:29 crc kubenswrapper[4812]: I0218 16:54:29.450337 4812 generic.go:334] "Generic (PLEG): container finished" podID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerID="eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee" exitCode=0 Feb 18 16:54:29 crc kubenswrapper[4812]: I0218 16:54:29.450710 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerDied","Data":"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee"} Feb 18 16:54:29 crc kubenswrapper[4812]: I0218 16:54:29.456019 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22112483-fba0-45a2-90d1-5f35b199a471","Type":"ContainerStarted","Data":"03e233ba66a6cf67169ceaa55467c2b641d21783b4a7d835001c3e6ce76e0ebb"} Feb 18 16:54:29 crc kubenswrapper[4812]: I0218 16:54:29.456154 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:54:29 crc kubenswrapper[4812]: I0218 16:54:29.496701 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5351440419999998 podStartE2EDuration="25.496679333s" podCreationTimestamp="2026-02-18 16:54:04 +0000 UTC" firstStartedPulling="2026-02-18 16:54:05.025039025 +0000 UTC m=+1465.290649934" lastFinishedPulling="2026-02-18 16:54:27.986574316 +0000 UTC m=+1488.252185225" 
observedRunningTime="2026-02-18 16:54:29.48794681 +0000 UTC m=+1489.753557729" watchObservedRunningTime="2026-02-18 16:54:29.496679333 +0000 UTC m=+1489.762290242" Feb 18 16:54:33 crc kubenswrapper[4812]: I0218 16:54:33.413700 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:54:33 crc kubenswrapper[4812]: I0218 16:54:33.414354 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:54:55 crc kubenswrapper[4812]: E0218 16:54:55.523030 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 18 16:54:55 crc kubenswrapper[4812]: E0218 16:54:55.523775 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n579h689h555h68bhfch597h86h5h6dhdbhbfh644hbh5fh579h94hd8hd5h544h576h89h585h5bbhdbh557h4h7dh556h5ddhfch8h9fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hz9rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(22112483-fba0-45a2-90d1-5f35b199a471): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 16:54:55 crc kubenswrapper[4812]: E0218 16:54:55.524878 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="22112483-fba0-45a2-90d1-5f35b199a471" Feb 18 16:54:55 crc kubenswrapper[4812]: E0218 16:54:55.777476 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="22112483-fba0-45a2-90d1-5f35b199a471" Feb 18 16:54:56 crc kubenswrapper[4812]: I0218 16:54:56.790375 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerStarted","Data":"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb"} Feb 18 16:54:56 crc kubenswrapper[4812]: I0218 16:54:56.823982 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jfm29" podStartSLOduration=4.723843274 podStartE2EDuration="39.823945698s" podCreationTimestamp="2026-02-18 16:54:17 +0000 UTC" firstStartedPulling="2026-02-18 16:54:20.348900968 +0000 UTC m=+1480.614511877" lastFinishedPulling="2026-02-18 16:54:55.449003392 +0000 UTC m=+1515.714614301" observedRunningTime="2026-02-18 16:54:56.818273797 +0000 UTC m=+1517.083884706" watchObservedRunningTime="2026-02-18 16:54:56.823945698 +0000 UTC m=+1517.089556607" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.084482 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.085043 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.292333 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379476 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379626 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379698 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrfpj\" (UniqueName: \"kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.379879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key\") pod \"f905c17d-31e8-4e36-a13a-ccc837408c9f\" (UID: \"f905c17d-31e8-4e36-a13a-ccc837408c9f\") " Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.380942 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs" (OuterVolumeSpecName: "logs") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.387712 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.388615 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj" (OuterVolumeSpecName: "kube-api-access-mrfpj") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "kube-api-access-mrfpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.409431 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.423924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data" (OuterVolumeSpecName: "config-data") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.437929 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.438988 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts" (OuterVolumeSpecName: "scripts") pod "f905c17d-31e8-4e36-a13a-ccc837408c9f" (UID: "f905c17d-31e8-4e36-a13a-ccc837408c9f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482348 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482555 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f905c17d-31e8-4e36-a13a-ccc837408c9f-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482654 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482711 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482773 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f905c17d-31e8-4e36-a13a-ccc837408c9f-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482832 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f905c17d-31e8-4e36-a13a-ccc837408c9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.482886 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrfpj\" (UniqueName: \"kubernetes.io/projected/f905c17d-31e8-4e36-a13a-ccc837408c9f-kube-api-access-mrfpj\") on node \"crc\" DevicePath \"\"" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.809870 4812 generic.go:334] "Generic (PLEG): container finished" podID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerID="f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" exitCode=137 Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.810694 4812 generic.go:334] "Generic (PLEG): container finished" podID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerID="b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" exitCode=137 Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.810350 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerDied","Data":"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a"} Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.810334 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-544c585488-4dbfm" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.810913 4812 scope.go:117] "RemoveContainer" containerID="f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.810899 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerDied","Data":"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41"} Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.811003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-544c585488-4dbfm" event={"ID":"f905c17d-31e8-4e36-a13a-ccc837408c9f","Type":"ContainerDied","Data":"ee169314aac798664ff1e121bb0f86e438b33a9ff70623371179b99e61e05f05"} Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.812990 4812 generic.go:334] "Generic (PLEG): container finished" podID="0a8de8dc-9b45-45b4-88bb-316168633d73" containerID="bc74b5bf4c3d20bae26ba5febbcb47a79e9df639094d0216b433df9bcb32acf0" exitCode=0 Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.813085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rtd9r" event={"ID":"0a8de8dc-9b45-45b4-88bb-316168633d73","Type":"ContainerDied","Data":"bc74b5bf4c3d20bae26ba5febbcb47a79e9df639094d0216b433df9bcb32acf0"} Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.817900 4812 generic.go:334] "Generic (PLEG): container finished" podID="09eb0e05-320a-463b-85cd-e1e387bb2610" containerID="974ec9d8d111fdb8c79fa08ce8f123e8211389121a03705501c003f02cab124e" exitCode=0 Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.817959 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnx6" event={"ID":"09eb0e05-320a-463b-85cd-e1e387bb2610","Type":"ContainerDied","Data":"974ec9d8d111fdb8c79fa08ce8f123e8211389121a03705501c003f02cab124e"} Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.857702 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-db576bcfc-pcjbk"] Feb 18 16:54:58 crc kubenswrapper[4812]: E0218 16:54:58.858261 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858283 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: E0218 16:54:58.858325 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon-log" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858331 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon-log" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858494 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858523 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon-log" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858533 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: E0218 
16:54:58.858879 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.858895 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" containerName="horizon" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.860276 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.867851 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-db576bcfc-pcjbk"] Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.870959 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.871266 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.871447 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.876764 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.883902 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-544c585488-4dbfm"] Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-internal-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991521 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-etc-swift\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-public-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991595 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-log-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991615 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-config-data\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: 
I0218 16:54:58.991695 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-combined-ca-bundle\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdpx\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-kube-api-access-4tdpx\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:58 crc kubenswrapper[4812]: I0218 16:54:58.991984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-run-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.007370 4812 scope.go:117] "RemoveContainer" containerID="d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093146 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-run-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093233 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-internal-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093275 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-etc-swift\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-public-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093334 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-log-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093356 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-config-data\") pod 
\"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093420 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-combined-ca-bundle\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.093449 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdpx\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-kube-api-access-4tdpx\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.094263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-run-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.094760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-log-httpd\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.098190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-internal-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.098483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-config-data\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.099151 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-etc-swift\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.099769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-combined-ca-bundle\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.115887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-public-tls-certs\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 
16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.129314 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdpx\" (UniqueName: \"kubernetes.io/projected/b814aa4e-5f04-4919-bfb3-153dd88e6ef8-kube-api-access-4tdpx\") pod \"swift-proxy-db576bcfc-pcjbk\" (UID: \"b814aa4e-5f04-4919-bfb3-153dd88e6ef8\") " pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.133374 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfm29" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" probeResult="failure" output=< Feb 18 16:54:59 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:54:59 crc kubenswrapper[4812]: > Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.182060 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.184261 4812 scope.go:117] "RemoveContainer" containerID="b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.244332 4812 scope.go:117] "RemoveContainer" containerID="f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" Feb 18 16:54:59 crc kubenswrapper[4812]: E0218 16:54:59.244926 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a\": container with ID starting with f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a not found: ID does not exist" containerID="f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.244983 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a"} err="failed to get container status \"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a\": rpc error: code = NotFound desc = could not find container \"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a\": container with ID starting with f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.245022 4812 scope.go:117] "RemoveContainer" containerID="d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" Feb 18 16:54:59 crc kubenswrapper[4812]: E0218 16:54:59.245437 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e\": container with ID starting with d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e not found: ID does not exist" containerID="d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.245506 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e"} err="failed to get container status \"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e\": rpc error: code = NotFound desc = could not find container \"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e\": container with ID starting with 
d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.245533 4812 scope.go:117] "RemoveContainer" containerID="b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" Feb 18 16:54:59 crc kubenswrapper[4812]: E0218 16:54:59.246020 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41\": container with ID starting with b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41 not found: ID does not exist" containerID="b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246057 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41"} err="failed to get container status \"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41\": rpc error: code = NotFound desc = could not find container \"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41\": container with ID starting with b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41 not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246079 4812 scope.go:117] "RemoveContainer" containerID="f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246451 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a"} err="failed to get container status \"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a\": rpc error: code = NotFound desc = could not find container \"f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a\": container with ID starting with f7b43234217e276889dea9197fe38c89443c76499d49a28690bc24efa27bee6a not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246480 4812 scope.go:117] "RemoveContainer" containerID="d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246736 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e"} err="failed to get container status \"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e\": rpc error: code = NotFound desc = could not find container \"d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e\": container with ID starting with d1d5120065986c150ecc7cadc91eea9a1f277df4e5a357ee7a270d6fa0bebe4e not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.246762 4812 scope.go:117] "RemoveContainer" containerID="b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.247028 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41"} err="failed to get container status \"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41\": rpc error: code = NotFound desc = could not find container \"b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41\": container with ID starting with 
b17217f5d36fce4963064c9f819855add62c78a5ddf35e9ff3f112bdf44d6b41 not found: ID does not exist" Feb 18 16:54:59 crc kubenswrapper[4812]: I0218 16:54:59.856401 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-db576bcfc-pcjbk"] Feb 18 16:54:59 crc kubenswrapper[4812]: W0218 16:54:59.859442 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb814aa4e_5f04_4919_bfb3_153dd88e6ef8.slice/crio-0302ef3a733d718c373242b52f27226796a9647e55482067b78cd041e6665e1f WatchSource:0}: Error finding container 0302ef3a733d718c373242b52f27226796a9647e55482067b78cd041e6665e1f: Status 404 returned error can't find the container with id 0302ef3a733d718c373242b52f27226796a9647e55482067b78cd041e6665e1f Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.100668 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tnx6" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.212312 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts\") pod \"09eb0e05-320a-463b-85cd-e1e387bb2610\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.212396 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p84c\" (UniqueName: \"kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c\") pod \"09eb0e05-320a-463b-85cd-e1e387bb2610\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.212501 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs\") pod \"09eb0e05-320a-463b-85cd-e1e387bb2610\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.212572 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle\") pod \"09eb0e05-320a-463b-85cd-e1e387bb2610\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.212614 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data\") pod \"09eb0e05-320a-463b-85cd-e1e387bb2610\" (UID: \"09eb0e05-320a-463b-85cd-e1e387bb2610\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.213498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs" (OuterVolumeSpecName: "logs") pod "09eb0e05-320a-463b-85cd-e1e387bb2610" (UID: "09eb0e05-320a-463b-85cd-e1e387bb2610"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.218737 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts" (OuterVolumeSpecName: "scripts") pod "09eb0e05-320a-463b-85cd-e1e387bb2610" (UID: "09eb0e05-320a-463b-85cd-e1e387bb2610"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.223445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c" (OuterVolumeSpecName: "kube-api-access-7p84c") pod "09eb0e05-320a-463b-85cd-e1e387bb2610" (UID: "09eb0e05-320a-463b-85cd-e1e387bb2610"). InnerVolumeSpecName "kube-api-access-7p84c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.240902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data" (OuterVolumeSpecName: "config-data") pod "09eb0e05-320a-463b-85cd-e1e387bb2610" (UID: "09eb0e05-320a-463b-85cd-e1e387bb2610"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.259226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09eb0e05-320a-463b-85cd-e1e387bb2610" (UID: "09eb0e05-320a-463b-85cd-e1e387bb2610"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.316003 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09eb0e05-320a-463b-85cd-e1e387bb2610-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.316049 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.316061 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.316144 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09eb0e05-320a-463b-85cd-e1e387bb2610-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.316157 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p84c\" (UniqueName: \"kubernetes.io/projected/09eb0e05-320a-463b-85cd-e1e387bb2610-kube-api-access-7p84c\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.403993 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rtd9r" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.549525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle\") pod \"0a8de8dc-9b45-45b4-88bb-316168633d73\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.549683 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data\") pod \"0a8de8dc-9b45-45b4-88bb-316168633d73\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.549815 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f905c17d-31e8-4e36-a13a-ccc837408c9f" path="/var/lib/kubelet/pods/f905c17d-31e8-4e36-a13a-ccc837408c9f/volumes" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.549784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data\") pod \"0a8de8dc-9b45-45b4-88bb-316168633d73\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.550273 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pctnd\" (UniqueName: \"kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd\") pod \"0a8de8dc-9b45-45b4-88bb-316168633d73\" (UID: \"0a8de8dc-9b45-45b4-88bb-316168633d73\") " Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.555533 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0a8de8dc-9b45-45b4-88bb-316168633d73" (UID: "0a8de8dc-9b45-45b4-88bb-316168633d73"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.555551 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd" (OuterVolumeSpecName: "kube-api-access-pctnd") pod "0a8de8dc-9b45-45b4-88bb-316168633d73" (UID: "0a8de8dc-9b45-45b4-88bb-316168633d73"). InnerVolumeSpecName "kube-api-access-pctnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.578444 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a8de8dc-9b45-45b4-88bb-316168633d73" (UID: "0a8de8dc-9b45-45b4-88bb-316168633d73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.597130 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data" (OuterVolumeSpecName: "config-data") pod "0a8de8dc-9b45-45b4-88bb-316168633d73" (UID: "0a8de8dc-9b45-45b4-88bb-316168633d73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.653840 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.653888 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.653902 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a8de8dc-9b45-45b4-88bb-316168633d73-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.653916 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pctnd\" (UniqueName: \"kubernetes.io/projected/0a8de8dc-9b45-45b4-88bb-316168633d73-kube-api-access-pctnd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.845653 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db576bcfc-pcjbk" event={"ID":"b814aa4e-5f04-4919-bfb3-153dd88e6ef8","Type":"ContainerStarted","Data":"939c5ee203340863782c6d13f0ad062ead06316a158863929390df9b73ad30a8"} Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.845968 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db576bcfc-pcjbk" event={"ID":"b814aa4e-5f04-4919-bfb3-153dd88e6ef8","Type":"ContainerStarted","Data":"ea99bdd68bcdc59ad89b7f8db073150f16c74bef7fd74a490692d62dcb26969e"} Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.846033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-db576bcfc-pcjbk" event={"ID":"b814aa4e-5f04-4919-bfb3-153dd88e6ef8","Type":"ContainerStarted","Data":"0302ef3a733d718c373242b52f27226796a9647e55482067b78cd041e6665e1f"} Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.846320 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.846398 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.848798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rtd9r" event={"ID":"0a8de8dc-9b45-45b4-88bb-316168633d73","Type":"ContainerDied","Data":"1b7d320758908962d5798d2088d71b1e9b27e484c71d0a5b6e9ac062bab07abe"} Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.848977 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7d320758908962d5798d2088d71b1e9b27e484c71d0a5b6e9ac062bab07abe" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.848871 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rtd9r" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.851844 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7tnx6" event={"ID":"09eb0e05-320a-463b-85cd-e1e387bb2610","Type":"ContainerDied","Data":"819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2"} Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.851912 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819aac46e696bcd86fe87e0bbd33246ba3cccc4e963833955d1cf46548ae20a2" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.852003 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7tnx6" Feb 18 16:55:00 crc kubenswrapper[4812]: I0218 16:55:00.877350 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-db576bcfc-pcjbk" podStartSLOduration=2.877326074 podStartE2EDuration="2.877326074s" podCreationTimestamp="2026-02-18 16:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:00.867981862 +0000 UTC m=+1521.133592771" watchObservedRunningTime="2026-02-18 16:55:00.877326074 +0000 UTC m=+1521.142937003" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.057937 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fb97c8db4-nflvl"] Feb 18 16:55:01 crc kubenswrapper[4812]: E0218 16:55:01.058369 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" containerName="placement-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.058385 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" containerName="placement-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: E0218 16:55:01.058404 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" containerName="glance-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.058410 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" containerName="glance-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.058572 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" containerName="glance-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.058585 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" containerName="placement-db-sync" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.059538 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.065035 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.065604 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.065743 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xjzsl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.065822 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.065743 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066399 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-scripts\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-internal-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066478 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-combined-ca-bundle\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcnj\" (UniqueName: \"kubernetes.io/projected/46a98f8d-436c-4726-aa29-f838c4f3d216-kube-api-access-4rcnj\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-config-data\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a98f8d-436c-4726-aa29-f838c4f3d216-logs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.066697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-public-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.081609 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb97c8db4-nflvl"] Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-combined-ca-bundle\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcnj\" (UniqueName: \"kubernetes.io/projected/46a98f8d-436c-4726-aa29-f838c4f3d216-kube-api-access-4rcnj\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168737 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-config-data\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168799 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a98f8d-436c-4726-aa29-f838c4f3d216-logs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-public-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-scripts\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.168972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-internal-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.172913 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a98f8d-436c-4726-aa29-f838c4f3d216-logs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.181058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-internal-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.181815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-config-data\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.188057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-combined-ca-bundle\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.193489 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-scripts\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.202739 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46a98f8d-436c-4726-aa29-f838c4f3d216-public-tls-certs\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.206276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcnj\" (UniqueName: \"kubernetes.io/projected/46a98f8d-436c-4726-aa29-f838c4f3d216-kube-api-access-4rcnj\") pod \"placement-fb97c8db4-nflvl\" (UID: \"46a98f8d-436c-4726-aa29-f838c4f3d216\") " pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.380374 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.381846 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.382733 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.397022 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579210 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579505 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579568 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.579617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.681982 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.682030 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.682048 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.682176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.682198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.682269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.683048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.684152 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.684196 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.687405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.691306 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.704020 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrpl\" (UniqueName: 
\"kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl\") pod \"dnsmasq-dns-785d8bcb8c-fgthj\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:01 crc kubenswrapper[4812]: I0218 16:55:01.705932 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.005916 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb97c8db4-nflvl"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.247541 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.271362 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.272912 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.275326 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.275942 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.276256 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttnms" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.299238 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.401912 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwpb\" (UniqueName: \"kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402065 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402111 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402443 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.402516 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwpb\" (UniqueName: \"kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504686 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504707 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.504779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.505698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.506545 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.515197 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.515616 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.517406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.522016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.551016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwpb\" (UniqueName: \"kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.575679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.652917 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.655373 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.661631 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.701349 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.752772 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814035 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814517 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhk88\" (UniqueName: \"kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.814761 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.878157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerStarted","Data":"831930e0276db2995cc8ccd71fa3ecfcacb55030070013d6a774e57882932c6e"} Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.878449 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerStarted","Data":"3ecb9b4a26b5656681bbc9bac35064db77a9826f7f6f0ff4ec59472240fdb7e5"} Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.879652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb97c8db4-nflvl" event={"ID":"46a98f8d-436c-4726-aa29-f838c4f3d216","Type":"ContainerStarted","Data":"cc70a1e3344c582e55f8b4e81016cd20eb32f11f85a3bec7102fcafbd8c08165"} Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.879673 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb97c8db4-nflvl" event={"ID":"46a98f8d-436c-4726-aa29-f838c4f3d216","Type":"ContainerStarted","Data":"f1cee2b37a117c5f5be373f95dc91fb1a8d918d372d87c8d65145d06fb3f5fde"} Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917215 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917333 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhk88\" (UniqueName: \"kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.917473 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.920514 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.921172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.922654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.923346 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.925693 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.937436 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.951503 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhk88\" (UniqueName: \"kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.963427 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:02 crc kubenswrapper[4812]: I0218 16:55:02.975340 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.324633 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.413666 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.413727 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.675966 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.901325 4812 generic.go:334] "Generic (PLEG): container finished" podID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerID="831930e0276db2995cc8ccd71fa3ecfcacb55030070013d6a774e57882932c6e" exitCode=0 Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.901590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerDied","Data":"831930e0276db2995cc8ccd71fa3ecfcacb55030070013d6a774e57882932c6e"} Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.906685 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerStarted","Data":"a00a58a6938f9cc101edd8c574fb0ee0a94d671dc267468eedc1f69ccde1ef6e"} Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.915346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerStarted","Data":"9fcd7b5c72bcedca87bdb0fee25fd990f7a3a9749e586fb81b6d8f3b76058d89"} Feb 18 16:55:03 crc kubenswrapper[4812]: I0218 16:55:03.937716 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb97c8db4-nflvl" event={"ID":"46a98f8d-436c-4726-aa29-f838c4f3d216","Type":"ContainerStarted","Data":"1bc526f52e9ee57d455ef7069c6c4415a8328d4251d224c13b600b7731101eec"} Feb 18 16:55:04 crc kubenswrapper[4812]: I0218 16:55:04.949222 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerStarted","Data":"f141a3d1b1a26ff1e01e9f2146913d643acb86133cb9b01f1b4031beb533db32"} Feb 18 16:55:04 crc kubenswrapper[4812]: I0218 16:55:04.949612 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:04 crc kubenswrapper[4812]: I0218 16:55:04.949630 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:04 crc kubenswrapper[4812]: I0218 16:55:04.972062 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fb97c8db4-nflvl" podStartSLOduration=3.972039885 
podStartE2EDuration="3.972039885s" podCreationTimestamp="2026-02-18 16:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:04.970202849 +0000 UTC m=+1525.235813778" watchObservedRunningTime="2026-02-18 16:55:04.972039885 +0000 UTC m=+1525.237650794" Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.002963 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.057140 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.109727 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.964213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerStarted","Data":"829741699c5e6a6b61fd0004521e95c7464506eab9e2d5272f5815a27c941f73"} Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.964708 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.971397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerStarted","Data":"9ad8693cd92261e797089f3e015456217a576faac7feb1fb66ef30672e970c4a"} Feb 18 16:55:05 crc kubenswrapper[4812]: I0218 16:55:05.989141 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" podStartSLOduration=4.989123132 podStartE2EDuration="4.989123132s" podCreationTimestamp="2026-02-18 16:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:05.985184154 +0000 UTC m=+1526.250795073" watchObservedRunningTime="2026-02-18 16:55:05.989123132 +0000 UTC m=+1526.254734031" Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.982580 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerStarted","Data":"064cddf3b05f5294713949527e705b6b3413eec05090f091a0ab9e3252974f6d"} Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.982738 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-log" containerID="cri-o://f141a3d1b1a26ff1e01e9f2146913d643acb86133cb9b01f1b4031beb533db32" gracePeriod=30 Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.982771 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-httpd" containerID="cri-o://064cddf3b05f5294713949527e705b6b3413eec05090f091a0ab9e3252974f6d" gracePeriod=30 Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.985219 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerStarted","Data":"cf9694f08007134ea87874366b30a50d8aab5b3272ea1f4a7d9679796be19357"} Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.985256 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-log" containerID="cri-o://9ad8693cd92261e797089f3e015456217a576faac7feb1fb66ef30672e970c4a" gracePeriod=30 Feb 18 16:55:06 crc kubenswrapper[4812]: I0218 16:55:06.985297 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-httpd" containerID="cri-o://cf9694f08007134ea87874366b30a50d8aab5b3272ea1f4a7d9679796be19357" gracePeriod=30 Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.020398 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.020379299 podStartE2EDuration="6.020379299s" podCreationTimestamp="2026-02-18 16:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:07.00591489 +0000 UTC m=+1527.271525809" watchObservedRunningTime="2026-02-18 16:55:07.020379299 +0000 UTC m=+1527.285990208" Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.061842 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.061822908 podStartE2EDuration="6.061822908s" podCreationTimestamp="2026-02-18 16:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:07.058895405 +0000 UTC m=+1527.324506324" watchObservedRunningTime="2026-02-18 16:55:07.061822908 +0000 UTC m=+1527.327433817" Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.997726 4812 generic.go:334] "Generic (PLEG): container finished" podID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerID="064cddf3b05f5294713949527e705b6b3413eec05090f091a0ab9e3252974f6d" exitCode=143 Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.997998 4812 generic.go:334] "Generic (PLEG): container finished" podID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerID="f141a3d1b1a26ff1e01e9f2146913d643acb86133cb9b01f1b4031beb533db32" exitCode=143 Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.997799 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerDied","Data":"064cddf3b05f5294713949527e705b6b3413eec05090f091a0ab9e3252974f6d"} Feb 18 16:55:07 crc kubenswrapper[4812]: I0218 16:55:07.998053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerDied","Data":"f141a3d1b1a26ff1e01e9f2146913d643acb86133cb9b01f1b4031beb533db32"} Feb 18 16:55:08 crc kubenswrapper[4812]: I0218 16:55:08.000791 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8703c02-e7b0-4808-93cf-ec415311e125" containerID="cf9694f08007134ea87874366b30a50d8aab5b3272ea1f4a7d9679796be19357" exitCode=143 Feb 18 16:55:08 crc kubenswrapper[4812]: I0218 16:55:08.000838 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="f8703c02-e7b0-4808-93cf-ec415311e125" containerID="9ad8693cd92261e797089f3e015456217a576faac7feb1fb66ef30672e970c4a" exitCode=143 Feb 18 16:55:08 crc kubenswrapper[4812]: I0218 16:55:08.000866 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerDied","Data":"cf9694f08007134ea87874366b30a50d8aab5b3272ea1f4a7d9679796be19357"} Feb 18 16:55:08 crc kubenswrapper[4812]: I0218 16:55:08.000898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerDied","Data":"9ad8693cd92261e797089f3e015456217a576faac7feb1fb66ef30672e970c4a"} Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.144811 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfm29" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" probeResult="failure" output=< Feb 18 16:55:09 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:55:09 crc kubenswrapper[4812]: > Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.187075 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.189822 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-db576bcfc-pcjbk" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.309622 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.319888 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.374919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375328 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhk88\" (UniqueName: \"kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375397 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375495 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375532 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375588 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375640 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375749 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375785 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375818 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwpb\" (UniqueName: \"kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375868 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data\") pod \"f8703c02-e7b0-4808-93cf-ec415311e125\" (UID: \"f8703c02-e7b0-4808-93cf-ec415311e125\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.375907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle\") pod \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\" (UID: \"a19ef5f5-6530-4b23-bb7a-1fa263880ee2\") " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.376168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs" (OuterVolumeSpecName: "logs") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.382049 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.382078 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.382229 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8703c02-e7b0-4808-93cf-ec415311e125-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.382188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs" (OuterVolumeSpecName: "logs") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.383278 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts" (OuterVolumeSpecName: "scripts") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.386375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.386388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts" (OuterVolumeSpecName: "scripts") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.386504 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.386549 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88" (OuterVolumeSpecName: "kube-api-access-dhk88") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "kube-api-access-dhk88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.417606 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb" (OuterVolumeSpecName: "kube-api-access-gwwpb") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "kube-api-access-gwwpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.419570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.444525 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data" (OuterVolumeSpecName: "config-data") pod "f8703c02-e7b0-4808-93cf-ec415311e125" (UID: "f8703c02-e7b0-4808-93cf-ec415311e125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.447626 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.478138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data" (OuterVolumeSpecName: "config-data") pod "a19ef5f5-6530-4b23-bb7a-1fa263880ee2" (UID: "a19ef5f5-6530-4b23-bb7a-1fa263880ee2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485221 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhk88\" (UniqueName: \"kubernetes.io/projected/f8703c02-e7b0-4808-93cf-ec415311e125-kube-api-access-dhk88\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485325 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485403 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485461 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485538 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485596 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485654 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485708 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485763 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwpb\" (UniqueName: \"kubernetes.io/projected/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-kube-api-access-gwwpb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485815 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8703c02-e7b0-4808-93cf-ec415311e125-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.485872 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19ef5f5-6530-4b23-bb7a-1fa263880ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.516859 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.518056 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.587512 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:09 crc kubenswrapper[4812]: I0218 16:55:09.587544 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.018144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22112483-fba0-45a2-90d1-5f35b199a471","Type":"ContainerStarted","Data":"d40768d9a966bb2861898fd90d5fe1cabce5c489e0a1d65f6ddef6ce03d42b8e"} Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.021037 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a19ef5f5-6530-4b23-bb7a-1fa263880ee2","Type":"ContainerDied","Data":"a00a58a6938f9cc101edd8c574fb0ee0a94d671dc267468eedc1f69ccde1ef6e"} Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.021076 4812 scope.go:117] "RemoveContainer" containerID="064cddf3b05f5294713949527e705b6b3413eec05090f091a0ab9e3252974f6d" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.021173 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.024614 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.024661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8703c02-e7b0-4808-93cf-ec415311e125","Type":"ContainerDied","Data":"9fcd7b5c72bcedca87bdb0fee25fd990f7a3a9749e586fb81b6d8f3b76058d89"} Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.051117 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=6.530045258 podStartE2EDuration="47.05108191s" podCreationTimestamp="2026-02-18 16:54:23 +0000 UTC" firstStartedPulling="2026-02-18 16:54:28.678607882 +0000 UTC m=+1488.944218791" lastFinishedPulling="2026-02-18 16:55:09.199644544 +0000 UTC m=+1529.465255443" observedRunningTime="2026-02-18 16:55:10.043848761 +0000 UTC m=+1530.309459670" watchObservedRunningTime="2026-02-18 16:55:10.05108191 +0000 UTC m=+1530.316692809" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.057275 4812 scope.go:117] "RemoveContainer" containerID="f141a3d1b1a26ff1e01e9f2146913d643acb86133cb9b01f1b4031beb533db32" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.074540 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.093183 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.099708 4812 scope.go:117] "RemoveContainer" containerID="cf9694f08007134ea87874366b30a50d8aab5b3272ea1f4a7d9679796be19357" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.115694 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: E0218 16:55:10.116081 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 
16:55:10.116107 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: E0218 16:55:10.116127 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116133 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: E0218 16:55:10.116154 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116160 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: E0218 16:55:10.116175 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116181 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116383 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116406 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116424 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-log" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.116433 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" containerName="glance-httpd" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.117566 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.129352 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.129733 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.129926 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.129998 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.130173 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ttnms" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.143075 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.162181 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.174437 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.176542 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.179763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.190281 4812 scope.go:117] "RemoveContainer" containerID="9ad8693cd92261e797089f3e015456217a576faac7feb1fb66ef30672e970c4a" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.190593 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197308 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197743 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfdh\" (UniqueName: \"kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197791 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: 
I0218 16:55:10.197878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.197964 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.198001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.198025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299517 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfdh\" (UniqueName: \"kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc 
kubenswrapper[4812]: I0218 16:55:10.299702 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.299884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.300830 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.301046 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.301129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.306798 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.307015 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.309569 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.322541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfdh\" (UniqueName: \"kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.322683 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.342274 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402077 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402192 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402215 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tjqt4\" (UniqueName: \"kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402345 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.402500 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.480209 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504389 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504463 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504491 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504729 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.504735 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqt4\" (UniqueName: \"kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.505175 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.505261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.505564 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.505940 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.511557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.524532 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.526059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.530125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqt4\" (UniqueName: \"kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.532276 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19ef5f5-6530-4b23-bb7a-1fa263880ee2" path="/var/lib/kubelet/pods/a19ef5f5-6530-4b23-bb7a-1fa263880ee2/volumes" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.532823 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.533061 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8703c02-e7b0-4808-93cf-ec415311e125" path="/var/lib/kubelet/pods/f8703c02-e7b0-4808-93cf-ec415311e125/volumes" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.559940 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:10 crc kubenswrapper[4812]: I0218 16:55:10.829744 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.270511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.305094 4812 scope.go:117] "RemoveContainer" containerID="d6b23ce88fc8fc6574ebc493bceaca2e191909f2dc0ff7071f3edf60847b5bf1" Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.343316 4812 scope.go:117] "RemoveContainer" containerID="9470ec5635c7ef3aa2a060efd502c21b8732a65a892ca59bad903f37821faff1" Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.653318 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.711256 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.799000 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:55:11 crc kubenswrapper[4812]: I0218 16:55:11.799529 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="dnsmasq-dns" containerID="cri-o://fd13e883e659f33527c78b0ba6e202d1084e2b596142a56e05824fbad32c56cf" gracePeriod=10 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.060347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerStarted","Data":"5708e9d421360e7e795657bc57ff08ea32154e082e7a62794939c1b6f3829c05"} Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.078867 4812 generic.go:334] "Generic (PLEG): container finished" podID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerID="fd13e883e659f33527c78b0ba6e202d1084e2b596142a56e05824fbad32c56cf" exitCode=0 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.078972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" 
event={"ID":"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b","Type":"ContainerDied","Data":"fd13e883e659f33527c78b0ba6e202d1084e2b596142a56e05824fbad32c56cf"} Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.083431 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.083717 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-central-agent" containerID="cri-o://bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c" gracePeriod=30 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.083868 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="proxy-httpd" containerID="cri-o://c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7" gracePeriod=30 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.083920 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="sg-core" containerID="cri-o://215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069" gracePeriod=30 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.083977 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-notification-agent" containerID="cri-o://8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2" gracePeriod=30 Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.088238 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerStarted","Data":"e5647adbdbc52bf3303906505861ec74026fe19ffdb512a29b7a9f86e2003c40"} Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.710863 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.713754 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.713793 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.713828 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.713922 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.714044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.714072 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7mh\" (UniqueName: \"kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh\") pod \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\" (UID: \"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b\") " Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.725492 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh" (OuterVolumeSpecName: "kube-api-access-ms7mh") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "kube-api-access-ms7mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.807403 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.818227 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7mh\" (UniqueName: \"kubernetes.io/projected/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-kube-api-access-ms7mh\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.818261 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.835204 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.835717 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config" (OuterVolumeSpecName: "config") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.835842 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.837662 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" (UID: "1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.925446 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.925491 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.925503 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:12 crc kubenswrapper[4812]: I0218 16:55:12.925521 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115119 4812 generic.go:334] "Generic (PLEG): container finished" podID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerID="c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7" exitCode=0 Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115150 4812 generic.go:334] "Generic (PLEG): container finished" podID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerID="215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069" exitCode=2 Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115159 4812 generic.go:334] "Generic (PLEG): container finished" podID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerID="bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c" exitCode=0 Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115216 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerDied","Data":"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerDied","Data":"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.115310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerDied","Data":"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.118144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerStarted","Data":"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.121305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" event={"ID":"1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b","Type":"ContainerDied","Data":"7ab0d53c1dc854f679a4093b3fabb864f2ab3cca5c4ca4496ba7fcb29e640d92"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.121354 4812 scope.go:117] "RemoveContainer" 
containerID="fd13e883e659f33527c78b0ba6e202d1084e2b596142a56e05824fbad32c56cf" Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.121489 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wl988" Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.131881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerStarted","Data":"5776db8d56c60ad7586fa7667d233aee586699bfcc9e74b3da97e17f7da8e124"} Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.265401 4812 scope.go:117] "RemoveContainer" containerID="02fa8094d3b5e53f1c4a28c8b812a05acc100a4bc38306f81708ba4c48c646d9" Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.268337 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:55:13 crc kubenswrapper[4812]: I0218 16:55:13.281557 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wl988"] Feb 18 16:55:14 crc kubenswrapper[4812]: I0218 16:55:14.159142 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerStarted","Data":"15901cacc9a320f4eb5b92e5559b336c0c861316811dcd35520989ff1768b00d"} Feb 18 16:55:14 crc kubenswrapper[4812]: I0218 16:55:14.165401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerStarted","Data":"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de"} Feb 18 16:55:14 crc kubenswrapper[4812]: I0218 16:55:14.196951 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.196904969 podStartE2EDuration="4.196904969s" podCreationTimestamp="2026-02-18 16:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:14.179840775 +0000 UTC m=+1534.445451684" watchObservedRunningTime="2026-02-18 16:55:14.196904969 +0000 UTC m=+1534.462515878" Feb 18 16:55:14 crc kubenswrapper[4812]: I0218 16:55:14.216786 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.216755391 podStartE2EDuration="4.216755391s" podCreationTimestamp="2026-02-18 16:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:14.209603704 +0000 UTC m=+1534.475214613" watchObservedRunningTime="2026-02-18 16:55:14.216755391 +0000 UTC m=+1534.482366300" Feb 18 16:55:14 crc kubenswrapper[4812]: I0218 16:55:14.530912 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" path="/var/lib/kubelet/pods/1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b/volumes" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.148077 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.227055 4812 generic.go:334] "Generic (PLEG): container finished" podID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerID="8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2" exitCode=0 Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.227113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerDied","Data":"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2"} Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.227142 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d1389b5-c52f-440e-af95-996ecdc720f0","Type":"ContainerDied","Data":"c4b6674beab123fbc2f3bc9812b31b99e4e246d49cae7bda88f3c2dc3241b5f8"} Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.227161 4812 scope.go:117] "RemoveContainer" containerID="c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.227304 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.234213 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.234843 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" containerName="kube-state-metrics" containerID="cri-o://2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2" gracePeriod=30 Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.267083 4812 scope.go:117] "RemoveContainer" containerID="215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.290502 4812 scope.go:117] "RemoveContainer" containerID="8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.311655 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.311760 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.311789 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.311887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cqsc\" (UniqueName: \"kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 
16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.312118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.312149 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.312215 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd\") pod \"0d1389b5-c52f-440e-af95-996ecdc720f0\" (UID: \"0d1389b5-c52f-440e-af95-996ecdc720f0\") " Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.313755 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.314658 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.332322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts" (OuterVolumeSpecName: "scripts") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.332471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc" (OuterVolumeSpecName: "kube-api-access-8cqsc") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "kube-api-access-8cqsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.334646 4812 scope.go:117] "RemoveContainer" containerID="bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.347749 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.415604 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.415643 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.415655 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.415666 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d1389b5-c52f-440e-af95-996ecdc720f0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.415677 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cqsc\" (UniqueName: \"kubernetes.io/projected/0d1389b5-c52f-440e-af95-996ecdc720f0-kube-api-access-8cqsc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.463596 4812 scope.go:117] "RemoveContainer" containerID="c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.466272 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7\": container with ID starting with c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7 not found: ID does not exist" containerID="c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.466384 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7"} err="failed to get container status \"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7\": rpc error: code = NotFound desc = could not find container \"c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7\": container with ID starting with c36053523c13ba16299092f898999258386a04cc3c6d0b536cc21410dcbadaa7 not found: ID does not exist" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.466464 4812 scope.go:117] "RemoveContainer" containerID="215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.470174 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069\": container with ID starting with 215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069 not found: ID does not exist" containerID="215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.470274 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069"} err="failed to get container status 
\"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069\": rpc error: code = NotFound desc = could not find container \"215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069\": container with ID starting with 215fb11218d0e04a4baceb4a14c105152f7cb60ede24b7cce5ca086c82981069 not found: ID does not exist" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.470362 4812 scope.go:117] "RemoveContainer" containerID="8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.479347 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2\": container with ID starting with 8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2 not found: ID does not exist" containerID="8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.479394 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2"} err="failed to get container status \"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2\": rpc error: code = NotFound desc = could not find container \"8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2\": container with ID starting with 8023457c57675c5d96b61980c834f6a8870bb21dec7107755a99681b6aed9cb2 not found: ID does not exist" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.479423 4812 scope.go:117] "RemoveContainer" containerID="bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.479923 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c\": container with ID starting with bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c not found: ID does not exist" containerID="bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.479971 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c"} err="failed to get container status \"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c\": rpc error: code = NotFound desc = could not find container \"bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c\": container with ID starting with bd5465bf483f439e51fe7e565fbf64d7765249ef47f38ce472d52e2fee99ba0c not found: ID does not exist" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.486929 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.508319 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data" (OuterVolumeSpecName: "config-data") pod "0d1389b5-c52f-440e-af95-996ecdc720f0" (UID: "0d1389b5-c52f-440e-af95-996ecdc720f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.521318 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.521355 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d1389b5-c52f-440e-af95-996ecdc720f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.599980 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.632289 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652152 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652631 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-notification-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652650 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-notification-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652668 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="init" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652674 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="init" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652687 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="proxy-httpd" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652694 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="proxy-httpd" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652707 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-central-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652713 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-central-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652726 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="dnsmasq-dns" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652732 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="dnsmasq-dns" Feb 18 16:55:16 crc kubenswrapper[4812]: E0218 16:55:16.652744 4812 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="sg-core" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652750 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="sg-core" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652916 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="sg-core" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652929 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-central-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652944 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="proxy-httpd" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652963 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" containerName="ceilometer-notification-agent" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.652972 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8dd10e-dec0-4d06-a36d-8ce52c6a5d1b" containerName="dnsmasq-dns" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.654636 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.658662 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.658849 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.664724 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpm4\" (UniqueName: \"kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830505 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830699 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830769 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830808 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.830907 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.936454 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.936791 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.936849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.936961 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpm4\" (UniqueName: \"kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.937012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.937051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.937133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.938587 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.938619 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.941587 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.942154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.943966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.951447 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.958269 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpm4\" (UniqueName: \"kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4\") pod \"ceilometer-0\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " pod="openstack/ceilometer-0" Feb 18 16:55:16 crc kubenswrapper[4812]: I0218 16:55:16.987150 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.115727 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.242960 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxkm\" (UniqueName: \"kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm\") pod \"edbfaf09-13f2-49f8-8f32-5b149c8c69be\" (UID: \"edbfaf09-13f2-49f8-8f32-5b149c8c69be\") " Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.250088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm" (OuterVolumeSpecName: "kube-api-access-qnxkm") pod "edbfaf09-13f2-49f8-8f32-5b149c8c69be" (UID: "edbfaf09-13f2-49f8-8f32-5b149c8c69be"). InnerVolumeSpecName "kube-api-access-qnxkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.254434 4812 generic.go:334] "Generic (PLEG): container finished" podID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" containerID="2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2" exitCode=2 Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.254479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbfaf09-13f2-49f8-8f32-5b149c8c69be","Type":"ContainerDied","Data":"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2"} Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.254505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbfaf09-13f2-49f8-8f32-5b149c8c69be","Type":"ContainerDied","Data":"3d2fc5aa7a5e7f6b3bf08c13f442e5d5fe0b18dc4864198529d7c51f3c95c39f"} Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.254521 4812 scope.go:117] "RemoveContainer" containerID="2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.254542 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.347417 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnxkm\" (UniqueName: \"kubernetes.io/projected/edbfaf09-13f2-49f8-8f32-5b149c8c69be-kube-api-access-qnxkm\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.381158 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.407255 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.407990 4812 scope.go:117] "RemoveContainer" containerID="2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2" Feb 18 16:55:17 crc kubenswrapper[4812]: E0218 16:55:17.409015 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2\": container with ID starting with 2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2 not found: ID does not exist" containerID="2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.409084 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2"} err="failed to get container status \"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2\": rpc error: code = NotFound desc = could not find container \"2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2\": container with ID starting with 2641b7e25c1fb8941bc0a09e46a84409b89c8b4f80acbe03e53f881d122786a2 not found: ID does not exist" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.417033 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:17 crc kubenswrapper[4812]: E0218 16:55:17.418738 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" containerName="kube-state-metrics" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.418781 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" containerName="kube-state-metrics" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.418974 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" containerName="kube-state-metrics" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.419692 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.424690 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.424934 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.437590 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.450089 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6v74\" (UniqueName: \"kubernetes.io/projected/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-api-access-n6v74\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.450429 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.450811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.451003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: W0218 16:55:17.508417 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6584b8b_5fb9_4406_95f4_63819a93a0fd.slice/crio-b55b5a762a47ac89041a2e38785fdfe7ea97aa3cd2cea5216df649588eabc655 WatchSource:0}: Error finding container b55b5a762a47ac89041a2e38785fdfe7ea97aa3cd2cea5216df649588eabc655: Status 404 returned error can't find the container with id b55b5a762a47ac89041a2e38785fdfe7ea97aa3cd2cea5216df649588eabc655 Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.508540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.553460 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6v74\" (UniqueName: 
\"kubernetes.io/projected/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-api-access-n6v74\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.553540 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.553603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.553676 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.558506 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.558570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.560348 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.574271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6v74\" (UniqueName: \"kubernetes.io/projected/bb35e4e7-97db-42af-b8c5-0b79550306f2-kube-api-access-n6v74\") pod \"kube-state-metrics-0\" (UID: \"bb35e4e7-97db-42af-b8c5-0b79550306f2\") " pod="openstack/kube-state-metrics-0" Feb 18 16:55:17 crc kubenswrapper[4812]: I0218 16:55:17.761019 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.134139 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.188870 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:55:18 crc kubenswrapper[4812]: W0218 16:55:18.220726 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb35e4e7_97db_42af_b8c5_0b79550306f2.slice/crio-1ee3c0fa0e9d7e1e73c0a9fcd3e833f183c5c72c86d13d5b32b4d0853e7748fd WatchSource:0}: Error finding container 1ee3c0fa0e9d7e1e73c0a9fcd3e833f183c5c72c86d13d5b32b4d0853e7748fd: Status 404 returned error can't find the container with id 1ee3c0fa0e9d7e1e73c0a9fcd3e833f183c5c72c86d13d5b32b4d0853e7748fd Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.223326 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.267350 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb35e4e7-97db-42af-b8c5-0b79550306f2","Type":"ContainerStarted","Data":"1ee3c0fa0e9d7e1e73c0a9fcd3e833f183c5c72c86d13d5b32b4d0853e7748fd"} Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.269697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerStarted","Data":"b55b5a762a47ac89041a2e38785fdfe7ea97aa3cd2cea5216df649588eabc655"} Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.519707 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1389b5-c52f-440e-af95-996ecdc720f0" path="/var/lib/kubelet/pods/0d1389b5-c52f-440e-af95-996ecdc720f0/volumes" Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.520912 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edbfaf09-13f2-49f8-8f32-5b149c8c69be" path="/var/lib/kubelet/pods/edbfaf09-13f2-49f8-8f32-5b149c8c69be/volumes" Feb 18 16:55:18 crc kubenswrapper[4812]: I0218 16:55:18.966755 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:55:19 crc kubenswrapper[4812]: I0218 16:55:19.283079 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jfm29" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" containerID="cri-o://64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb" gracePeriod=2 Feb 18 16:55:19 crc kubenswrapper[4812]: I0218 16:55:19.283273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerStarted","Data":"792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee"} Feb 18 16:55:19 crc kubenswrapper[4812]: I0218 16:55:19.870308 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.004858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content\") pod \"6d5382c0-1a62-4798-ab49-3e57e41ae698\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.004918 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities\") pod \"6d5382c0-1a62-4798-ab49-3e57e41ae698\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.005068 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqflh\" (UniqueName: \"kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh\") pod \"6d5382c0-1a62-4798-ab49-3e57e41ae698\" (UID: \"6d5382c0-1a62-4798-ab49-3e57e41ae698\") " Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.006173 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities" (OuterVolumeSpecName: "utilities") pod "6d5382c0-1a62-4798-ab49-3e57e41ae698" (UID: "6d5382c0-1a62-4798-ab49-3e57e41ae698"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.014414 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh" (OuterVolumeSpecName: "kube-api-access-kqflh") pod "6d5382c0-1a62-4798-ab49-3e57e41ae698" (UID: "6d5382c0-1a62-4798-ab49-3e57e41ae698"). InnerVolumeSpecName "kube-api-access-kqflh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.107371 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.107414 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqflh\" (UniqueName: \"kubernetes.io/projected/6d5382c0-1a62-4798-ab49-3e57e41ae698-kube-api-access-kqflh\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.195755 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.204557 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d5382c0-1a62-4798-ab49-3e57e41ae698" (UID: "6d5382c0-1a62-4798-ab49-3e57e41ae698"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.209336 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5382c0-1a62-4798-ab49-3e57e41ae698-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.293825 4812 generic.go:334] "Generic (PLEG): container finished" podID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerID="64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb" exitCode=0 Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.293892 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfm29" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.293915 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerDied","Data":"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb"} Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.293949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfm29" event={"ID":"6d5382c0-1a62-4798-ab49-3e57e41ae698","Type":"ContainerDied","Data":"9de23ebe7bc41f7da05069ede60f7711b6eb65038deffbcacea54ef69595dfd7"} Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.293971 4812 scope.go:117] "RemoveContainer" containerID="64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.296803 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bb35e4e7-97db-42af-b8c5-0b79550306f2","Type":"ContainerStarted","Data":"c32719702e812bbf7db9b5650c3477c469338014feed3e4ec50f0bd00775e57e"} Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.297285 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.314184 4812 scope.go:117] "RemoveContainer" containerID="eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.323245 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.241159371 podStartE2EDuration="3.32322259s" podCreationTimestamp="2026-02-18 16:55:17 +0000 UTC" firstStartedPulling="2026-02-18 16:55:18.223651673 +0000 UTC m=+1538.489262582" lastFinishedPulling="2026-02-18 16:55:19.305714892 +0000 UTC m=+1539.571325801" observedRunningTime="2026-02-18 16:55:20.314610606 +0000 UTC m=+1540.580221545" watchObservedRunningTime="2026-02-18 16:55:20.32322259 +0000 UTC m=+1540.588833499" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.350811 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.361230 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jfm29"] Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.370578 4812 scope.go:117] "RemoveContainer" containerID="f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.394580 4812 scope.go:117] "RemoveContainer" 
containerID="64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb" Feb 18 16:55:20 crc kubenswrapper[4812]: E0218 16:55:20.395137 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb\": container with ID starting with 64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb not found: ID does not exist" containerID="64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.395209 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb"} err="failed to get container status \"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb\": rpc error: code = NotFound desc = could not find container \"64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb\": container with ID starting with 64f93f2d1ded79fa54f9c2df34792a7118984fc25d10458ef5087cd7d2317ceb not found: ID does not exist" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.395245 4812 scope.go:117] "RemoveContainer" containerID="eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee" Feb 18 16:55:20 crc kubenswrapper[4812]: E0218 16:55:20.395724 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee\": container with ID starting with eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee not found: ID does not exist" containerID="eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.395770 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee"} err="failed to get container status \"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee\": rpc error: code = NotFound desc = could not find container \"eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee\": container with ID starting with eaa11928a11d7e775c153e84f4893c4e8b4258f40c58162312caa018f5a1ccee not found: ID does not exist" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.395795 4812 scope.go:117] "RemoveContainer" containerID="f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468" Feb 18 16:55:20 crc kubenswrapper[4812]: E0218 16:55:20.396252 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468\": container with ID starting with f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468 not found: ID does not exist" containerID="f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.396270 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468"} err="failed to get container status \"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468\": rpc error: code = NotFound desc = could not find container \"f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468\": container with ID starting with 
f9f93ff6cbda0717057bbb9dea90dabab7dc04c7be5d7ce693a66dad5bc8e468 not found: ID does not exist" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.480510 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.480696 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.529163 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" path="/var/lib/kubelet/pods/6d5382c0-1a62-4798-ab49-3e57e41ae698/volumes" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.530312 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.530386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.830092 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.830221 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.885759 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:20 crc kubenswrapper[4812]: I0218 16:55:20.899636 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:21 crc kubenswrapper[4812]: I0218 16:55:21.319224 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerStarted","Data":"ee292a4748e326886abc00210c98571bb0797e9aaf272726208aa6436c4058b7"} Feb 18 16:55:21 crc kubenswrapper[4812]: I0218 16:55:21.320975 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 16:55:21 crc kubenswrapper[4812]: I0218 16:55:21.321005 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 16:55:21 crc kubenswrapper[4812]: I0218 16:55:21.321029 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:21 crc kubenswrapper[4812]: I0218 16:55:21.321044 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:22 crc kubenswrapper[4812]: I0218 16:55:22.331585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerStarted","Data":"085f5b574b65fd2fb5995c2eca12c75def08b070a68072dce68f928534b455d7"} Feb 18 16:55:23 crc kubenswrapper[4812]: I0218 16:55:23.339862 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:23 crc kubenswrapper[4812]: I0218 16:55:23.339886 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:23 crc kubenswrapper[4812]: I0218 16:55:23.339925 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Feb 18 16:55:23 crc kubenswrapper[4812]: I0218 16:55:23.339902 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerStarted","Data":"165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d"} Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352544 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352294 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-central-agent" containerID="cri-o://792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee" gracePeriod=30 Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352643 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="proxy-httpd" containerID="cri-o://165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d" gracePeriod=30 Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352724 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-notification-agent" containerID="cri-o://ee292a4748e326886abc00210c98571bb0797e9aaf272726208aa6436c4058b7" gracePeriod=30 Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.352761 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="sg-core" containerID="cri-o://085f5b574b65fd2fb5995c2eca12c75def08b070a68072dce68f928534b455d7" gracePeriod=30 Feb 18 16:55:24 crc kubenswrapper[4812]: I0218 16:55:24.422657 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.955302074 podStartE2EDuration="8.422632457s" podCreationTimestamp="2026-02-18 16:55:16 +0000 UTC" firstStartedPulling="2026-02-18 16:55:17.512038516 +0000 UTC m=+1537.777649425" lastFinishedPulling="2026-02-18 16:55:23.979368899 +0000 UTC m=+1544.244979808" observedRunningTime="2026-02-18 16:55:24.414410003 +0000 UTC m=+1544.680020912" watchObservedRunningTime="2026-02-18 16:55:24.422632457 +0000 UTC m=+1544.688243376" Feb 18 16:55:25 crc kubenswrapper[4812]: I0218 16:55:25.364178 4812 generic.go:334] "Generic (PLEG): container finished" podID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerID="085f5b574b65fd2fb5995c2eca12c75def08b070a68072dce68f928534b455d7" exitCode=2 Feb 18 16:55:25 crc kubenswrapper[4812]: I0218 16:55:25.364214 4812 generic.go:334] "Generic (PLEG): container finished" podID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerID="ee292a4748e326886abc00210c98571bb0797e9aaf272726208aa6436c4058b7" exitCode=0 Feb 18 16:55:25 crc kubenswrapper[4812]: I0218 16:55:25.364237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerDied","Data":"085f5b574b65fd2fb5995c2eca12c75def08b070a68072dce68f928534b455d7"} Feb 18 16:55:25 crc kubenswrapper[4812]: I0218 16:55:25.364287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerDied","Data":"ee292a4748e326886abc00210c98571bb0797e9aaf272726208aa6436c4058b7"} Feb 18 16:55:26 crc kubenswrapper[4812]: I0218 16:55:26.282057 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 16:55:26 crc kubenswrapper[4812]: I0218 16:55:26.282444 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:26 crc kubenswrapper[4812]: I0218 16:55:26.292600 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 16:55:26 crc kubenswrapper[4812]: I0218 16:55:26.295835 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:26 crc kubenswrapper[4812]: I0218 16:55:26.295891 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:26 crc kubenswrapper[4812]: E0218 16:55:26.539245 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6584b8b_5fb9_4406_95f4_63819a93a0fd.slice/crio-792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:55:27 crc kubenswrapper[4812]: I0218 16:55:27.397375 4812 generic.go:334] "Generic (PLEG): container finished" podID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerID="792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee" exitCode=0 Feb 18 16:55:27 crc kubenswrapper[4812]: I0218 16:55:27.397971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerDied","Data":"792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee"} Feb 18 16:55:27 crc kubenswrapper[4812]: I0218 16:55:27.776110 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.413748 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.414840 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.414922 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.416145 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.416220 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" gracePeriod=600 Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.458151 4812 generic.go:334] "Generic (PLEG): container finished" podID="c32f52a7-3dab-42c3-b32d-ae230861ae69" containerID="c7509139e6fa8fa08532972bc94488999ef707df85725a7d81784410cded7af9" exitCode=0 Feb 18 16:55:33 crc kubenswrapper[4812]: I0218 16:55:33.458195 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tzm7v" event={"ID":"c32f52a7-3dab-42c3-b32d-ae230861ae69","Type":"ContainerDied","Data":"c7509139e6fa8fa08532972bc94488999ef707df85725a7d81784410cded7af9"} Feb 18 16:55:33 crc kubenswrapper[4812]: E0218 16:55:33.555503 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.129857 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.469156 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" exitCode=0 Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.469207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c"} Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.469251 4812 scope.go:117] "RemoveContainer" containerID="edeabf47af6006595519aa771b68b984be0be2b46974d76b4b7a1c5b0b579968" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.469863 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:55:34 crc kubenswrapper[4812]: E0218 16:55:34.470130 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.536323 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-65dc9"] Feb 18 16:55:34 crc kubenswrapper[4812]: E0218 16:55:34.536801 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="extract-content" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.536821 4812 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="extract-content" Feb 18 16:55:34 crc kubenswrapper[4812]: E0218 16:55:34.536842 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="extract-utilities" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.536851 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="extract-utilities" Feb 18 16:55:34 crc kubenswrapper[4812]: E0218 16:55:34.536881 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.536890 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.537125 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5382c0-1a62-4798-ab49-3e57e41ae698" containerName="registry-server" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.537871 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.571242 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-65dc9"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.616048 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fb97c8db4-nflvl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.636312 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-28pd7"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.637607 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.652244 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b9a0-account-create-update-q27jl"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.654131 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.658415 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.669013 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-28pd7"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.684209 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b9a0-account-create-update-q27jl"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.687084 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.687338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rftfc\" (UniqueName: \"kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.737151 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qnfpn"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.738387 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.745488 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qnfpn"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rftfc\" (UniqueName: \"kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798430 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknzb\" (UniqueName: \"kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 
16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tgp\" (UniqueName: \"kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq95\" (UniqueName: \"kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.798700 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.805091 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.830847 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-87be-account-create-update-s5bmn"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.836464 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.843908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rftfc\" (UniqueName: \"kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc\") pod \"nova-api-db-create-65dc9\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.844065 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.859739 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-87be-account-create-update-s5bmn"] Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.865714 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.913905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.913992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknzb\" (UniqueName: \"kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914082 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tgp\" (UniqueName: \"kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914171 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdq95\" (UniqueName: \"kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914228 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.914324 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vgd\" (UniqueName: \"kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.915264 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.915813 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.931889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.933861 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tgp\" (UniqueName: \"kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp\") pod \"nova-cell1-db-create-qnfpn\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.934673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknzb\" (UniqueName: \"kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb\") pod \"nova-cell0-db-create-28pd7\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.945154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdq95\" (UniqueName: \"kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95\") pod \"nova-api-b9a0-account-create-update-q27jl\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.953113 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:34 crc kubenswrapper[4812]: I0218 16:55:34.979298 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.020046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.020677 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vgd\" (UniqueName: \"kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.021561 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.035624 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-56e6-account-create-update-txtgr"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.036853 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.046476 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.058780 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5vgd\" (UniqueName: \"kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd\") pod \"nova-cell0-87be-account-create-update-s5bmn\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.069975 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-56e6-account-create-update-txtgr"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.073427 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.122943 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrnx\" (UniqueName: \"kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx\") pod \"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.125550 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts\") pod \"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.127345 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.130193 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.227364 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle\") pod \"c32f52a7-3dab-42c3-b32d-ae230861ae69\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.227632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data\") pod \"c32f52a7-3dab-42c3-b32d-ae230861ae69\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.227686 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnd5l\" (UniqueName: \"kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l\") pod \"c32f52a7-3dab-42c3-b32d-ae230861ae69\" (UID: \"c32f52a7-3dab-42c3-b32d-ae230861ae69\") " Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.228246 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts\") pod \"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.229653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsrnx\" (UniqueName: \"kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx\") pod \"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.232925 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts\") pod 
\"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.237498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c32f52a7-3dab-42c3-b32d-ae230861ae69" (UID: "c32f52a7-3dab-42c3-b32d-ae230861ae69"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.249530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l" (OuterVolumeSpecName: "kube-api-access-mnd5l") pod "c32f52a7-3dab-42c3-b32d-ae230861ae69" (UID: "c32f52a7-3dab-42c3-b32d-ae230861ae69"). InnerVolumeSpecName "kube-api-access-mnd5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.274056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsrnx\" (UniqueName: \"kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx\") pod \"nova-cell1-56e6-account-create-update-txtgr\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.310625 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.311051 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-log" containerID="cri-o://5776db8d56c60ad7586fa7667d233aee586699bfcc9e74b3da97e17f7da8e124" gracePeriod=30 Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.311459 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-httpd" containerID="cri-o://15901cacc9a320f4eb5b92e5559b336c0c861316811dcd35520989ff1768b00d" gracePeriod=30 Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.345658 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnd5l\" (UniqueName: \"kubernetes.io/projected/c32f52a7-3dab-42c3-b32d-ae230861ae69-kube-api-access-mnd5l\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.345726 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.361802 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c32f52a7-3dab-42c3-b32d-ae230861ae69" (UID: "c32f52a7-3dab-42c3-b32d-ae230861ae69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.449970 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32f52a7-3dab-42c3-b32d-ae230861ae69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.462491 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.529028 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tzm7v" event={"ID":"c32f52a7-3dab-42c3-b32d-ae230861ae69","Type":"ContainerDied","Data":"adfa50c8dd97dff1e33bbae959b9512757c0537ed725be58b08a839f48768702"} Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.529071 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfa50c8dd97dff1e33bbae959b9512757c0537ed725be58b08a839f48768702" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.529173 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tzm7v" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.634574 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-65dc9"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.834787 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-fc67d6965-tp8p8"] Feb 18 16:55:35 crc kubenswrapper[4812]: E0218 16:55:35.835322 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" containerName="barbican-db-sync" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.835341 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" containerName="barbican-db-sync" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.835542 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" containerName="barbican-db-sync" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.836742 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.842148 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.842385 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lhsv5" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.842519 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.856926 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fc67d6965-tp8p8"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.872553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.872678 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb79s\" (UniqueName: \"kubernetes.io/projected/d0217426-584c-43a5-8ada-d12d12452f63-kube-api-access-sb79s\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.872755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0217426-584c-43a5-8ada-d12d12452f63-logs\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.872795 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data-custom\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.872956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-combined-ca-bundle\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.929223 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54759bc498-rskxp"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.931145 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.935403 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.944322 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54759bc498-rskxp"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.954173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-28pd7"] Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.975610 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.975781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-combined-ca-bundle\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.975857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-logs\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.975951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.976012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf54z\" (UniqueName: \"kubernetes.io/projected/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-kube-api-access-cf54z\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.976077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb79s\" (UniqueName: \"kubernetes.io/projected/d0217426-584c-43a5-8ada-d12d12452f63-kube-api-access-sb79s\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.976158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data-custom\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 
16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.976249 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0217426-584c-43a5-8ada-d12d12452f63-logs\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.976884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0217426-584c-43a5-8ada-d12d12452f63-logs\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.977000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data-custom\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:35 crc kubenswrapper[4812]: I0218 16:55:35.977033 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-combined-ca-bundle\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.012045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data-custom\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.015514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-config-data\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.016201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0217426-584c-43a5-8ada-d12d12452f63-combined-ca-bundle\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.023579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb79s\" (UniqueName: \"kubernetes.io/projected/d0217426-584c-43a5-8ada-d12d12452f63-kube-api-access-sb79s\") pod \"barbican-worker-fc67d6965-tp8p8\" (UID: \"d0217426-584c-43a5-8ada-d12d12452f63\") " pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.032202 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.034433 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.049151 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.070898 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b9a0-account-create-update-q27jl"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrvt\" (UniqueName: \"kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079425 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data-custom\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-combined-ca-bundle\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079804 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079844 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data\") pod 
\"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.079920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.080064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-logs\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.080234 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf54z\" (UniqueName: \"kubernetes.io/projected/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-kube-api-access-cf54z\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.080629 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-logs\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.084852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-combined-ca-bundle\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.088311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.094771 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-config-data-custom\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.102931 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf54z\" (UniqueName: \"kubernetes.io/projected/8fa7f426-d545-42e2-aa86-7b1f3fb6006f-kube-api-access-cf54z\") pod \"barbican-keystone-listener-54759bc498-rskxp\" (UID: \"8fa7f426-d545-42e2-aa86-7b1f3fb6006f\") " pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.123917 4812 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.127021 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.146298 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.161423 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fc67d6965-tp8p8" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.170510 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181512 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181616 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndrvt\" (UniqueName: \"kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181799 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65sd\" (UniqueName: \"kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: 
I0218 16:55:36.181916 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.181958 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.182011 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.182066 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.183240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.183414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.183809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.183835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.184178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.200265 4812 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.204863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndrvt\" (UniqueName: \"kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt\") pod \"dnsmasq-dns-586bdc5f9-mjc8n\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.232012 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.275923 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qnfpn"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.283426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.283493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.283531 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.283586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.283680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65sd\" (UniqueName: \"kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.285275 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.288949 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc 
kubenswrapper[4812]: I0218 16:55:36.299443 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-87be-account-create-update-s5bmn"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.309017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.309753 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.310636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65sd\" (UniqueName: \"kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd\") pod \"barbican-api-84d7b57f88-k4mvt\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: W0218 16:55:36.325984 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcda780a7_35ad_48e5_b5fb_f37a6f169769.slice/crio-e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88 WatchSource:0}: Error finding container e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88: Status 404 returned error can't find the container with id e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88 Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.558603 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.559890 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-56e6-account-create-update-txtgr"] Feb 18 16:55:36 crc kubenswrapper[4812]: W0218 16:55:36.563904 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod901f24ea_a5b3_4034_a6ee_14b0e5fc5ef4.slice/crio-8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6 WatchSource:0}: Error finding container 8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6: Status 404 returned error can't find the container with id 8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6 Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.612762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" event={"ID":"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4","Type":"ContainerStarted","Data":"8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.617842 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65dc9" event={"ID":"f7065b61-1ff1-499d-8a26-0e5597389444","Type":"ContainerStarted","Data":"f337f4b82249c74aec58fc00579893bc80b47df97ec12e8d93c058270b06145d"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.617883 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65dc9" event={"ID":"f7065b61-1ff1-499d-8a26-0e5597389444","Type":"ContainerStarted","Data":"4e89267b4fe7f20cacb2838f4d442c92ec2d37afb8c21b4405e3e32a3c1fc8a0"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.623900 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" event={"ID":"cda780a7-35ad-48e5-b5fb-f37a6f169769","Type":"ContainerStarted","Data":"ed9ee3f4de92f750609e9cf2b5d1f4f4f36ba3f6d51819f2500aa8436c18e7b0"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.623940 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" event={"ID":"cda780a7-35ad-48e5-b5fb-f37a6f169769","Type":"ContainerStarted","Data":"e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.628573 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-28pd7" event={"ID":"87284460-f935-40ef-b594-411190374f3a","Type":"ContainerStarted","Data":"29d9d5651b3445da5608d0593e9203023aac128afb2786d4aab574f71cf916f7"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.628615 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-28pd7" event={"ID":"87284460-f935-40ef-b594-411190374f3a","Type":"ContainerStarted","Data":"78aa45ccf75109f6caca07eec64a72dc59e2ed1e6c5ccc4a672add9a0e2d3271"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.641236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qnfpn" event={"ID":"410f0c5b-7215-49bb-a9b1-cff11edd203c","Type":"ContainerStarted","Data":"227e5f88206c602ecdc1eea3b5aec8970895b4bce6900ab14efce774e5a036f7"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.643834 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b9a0-account-create-update-q27jl" 
event={"ID":"48d4c084-1ec7-4443-862a-a0c1087440dc","Type":"ContainerStarted","Data":"a730318c76bab26565d75718023912c223bc33527a42f755435fe477c0d014f9"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.643865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b9a0-account-create-update-q27jl" event={"ID":"48d4c084-1ec7-4443-862a-a0c1087440dc","Type":"ContainerStarted","Data":"eb620c55e9cbc01ae7f196521b41d06b11c4aa31a7f98599eb1e71fa33941a57"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.647663 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-65dc9" podStartSLOduration=2.647645154 podStartE2EDuration="2.647645154s" podCreationTimestamp="2026-02-18 16:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:36.632926119 +0000 UTC m=+1556.898537028" watchObservedRunningTime="2026-02-18 16:55:36.647645154 +0000 UTC m=+1556.913256063" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.662000 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-28pd7" podStartSLOduration=2.66198414 podStartE2EDuration="2.66198414s" podCreationTimestamp="2026-02-18 16:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:36.660010111 +0000 UTC m=+1556.925621020" watchObservedRunningTime="2026-02-18 16:55:36.66198414 +0000 UTC m=+1556.927595049" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.669628 4812 generic.go:334] "Generic (PLEG): container finished" podID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerID="5776db8d56c60ad7586fa7667d233aee586699bfcc9e74b3da97e17f7da8e124" exitCode=143 Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.669910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerDied","Data":"5776db8d56c60ad7586fa7667d233aee586699bfcc9e74b3da97e17f7da8e124"} Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.702704 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" podStartSLOduration=2.70268839 podStartE2EDuration="2.70268839s" podCreationTimestamp="2026-02-18 16:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:36.686491128 +0000 UTC m=+1556.952102037" watchObservedRunningTime="2026-02-18 16:55:36.70268839 +0000 UTC m=+1556.968299299" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.721169 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b9a0-account-create-update-q27jl" podStartSLOduration=2.721144748 podStartE2EDuration="2.721144748s" podCreationTimestamp="2026-02-18 16:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:36.715622491 +0000 UTC m=+1556.981233410" watchObservedRunningTime="2026-02-18 16:55:36.721144748 +0000 UTC m=+1556.986755677" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.802297 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qnfpn" 
podStartSLOduration=2.802272951 podStartE2EDuration="2.802272951s" podCreationTimestamp="2026-02-18 16:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:36.734619772 +0000 UTC m=+1557.000230681" watchObservedRunningTime="2026-02-18 16:55:36.802272951 +0000 UTC m=+1557.067883860" Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.828611 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fc67d6965-tp8p8"] Feb 18 16:55:36 crc kubenswrapper[4812]: I0218 16:55:36.974729 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54759bc498-rskxp"] Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.051578 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:37 crc kubenswrapper[4812]: E0218 16:55:37.165244 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7065b61_1ff1_499d_8a26_0e5597389444.slice/crio-conmon-f337f4b82249c74aec58fc00579893bc80b47df97ec12e8d93c058270b06145d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 16:55:37 crc kubenswrapper[4812]: W0218 16:55:37.334530 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafae0da4_aba0_420d_928b_8dde8472a40e.slice/crio-becf64d57e8da89f411328e8deba9ed7af8786af5af539681b4de14cb548d639 WatchSource:0}: Error finding container becf64d57e8da89f411328e8deba9ed7af8786af5af539681b4de14cb548d639: Status 404 returned error can't find the container with id becf64d57e8da89f411328e8deba9ed7af8786af5af539681b4de14cb548d639 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.339560 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.680403 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerStarted","Data":"becf64d57e8da89f411328e8deba9ed7af8786af5af539681b4de14cb548d639"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.682535 4812 generic.go:334] "Generic (PLEG): container finished" podID="f7065b61-1ff1-499d-8a26-0e5597389444" containerID="f337f4b82249c74aec58fc00579893bc80b47df97ec12e8d93c058270b06145d" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.682577 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65dc9" event={"ID":"f7065b61-1ff1-499d-8a26-0e5597389444","Type":"ContainerDied","Data":"f337f4b82249c74aec58fc00579893bc80b47df97ec12e8d93c058270b06145d"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.683976 4812 generic.go:334] "Generic (PLEG): container finished" podID="87284460-f935-40ef-b594-411190374f3a" containerID="29d9d5651b3445da5608d0593e9203023aac128afb2786d4aab574f71cf916f7" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.684013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-28pd7" event={"ID":"87284460-f935-40ef-b594-411190374f3a","Type":"ContainerDied","Data":"29d9d5651b3445da5608d0593e9203023aac128afb2786d4aab574f71cf916f7"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.685436 4812 
generic.go:334] "Generic (PLEG): container finished" podID="410f0c5b-7215-49bb-a9b1-cff11edd203c" containerID="eba61bef62832a6abfa682b18e1cb4b3612c6255d1c4618b047a3f1bfe62cb30" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.685473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qnfpn" event={"ID":"410f0c5b-7215-49bb-a9b1-cff11edd203c","Type":"ContainerDied","Data":"eba61bef62832a6abfa682b18e1cb4b3612c6255d1c4618b047a3f1bfe62cb30"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.687006 4812 generic.go:334] "Generic (PLEG): container finished" podID="48d4c084-1ec7-4443-862a-a0c1087440dc" containerID="a730318c76bab26565d75718023912c223bc33527a42f755435fe477c0d014f9" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.687047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b9a0-account-create-update-q27jl" event={"ID":"48d4c084-1ec7-4443-862a-a0c1087440dc","Type":"ContainerDied","Data":"a730318c76bab26565d75718023912c223bc33527a42f755435fe477c0d014f9"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.688206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fc67d6965-tp8p8" event={"ID":"d0217426-584c-43a5-8ada-d12d12452f63","Type":"ContainerStarted","Data":"d6027a9bd3ddd74d7980062e73395d70fa238690e8e8b1102cfd73c25f1c33a5"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.689236 4812 generic.go:334] "Generic (PLEG): container finished" podID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerID="486c3f8e107c279b9d67c4e405239ee8e53c01b5bd27201f27add7669d8c6958" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.689275 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" event={"ID":"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77","Type":"ContainerDied","Data":"486c3f8e107c279b9d67c4e405239ee8e53c01b5bd27201f27add7669d8c6958"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.689290 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" event={"ID":"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77","Type":"ContainerStarted","Data":"0e765b0c5aceaea756148330d9d4167293df0d7718b773a93dac15f7403aa3d5"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.715015 4812 generic.go:334] "Generic (PLEG): container finished" podID="901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" containerID="6c895d1876bf404d61fd7a1fd326169a88f6a32dfa786e38a728dcb4a654500f" exitCode=0 Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.715161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" event={"ID":"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4","Type":"ContainerDied","Data":"6c895d1876bf404d61fd7a1fd326169a88f6a32dfa786e38a728dcb4a654500f"} Feb 18 16:55:37 crc kubenswrapper[4812]: I0218 16:55:37.737641 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" event={"ID":"8fa7f426-d545-42e2-aa86-7b1f3fb6006f","Type":"ContainerStarted","Data":"92644a5ea20162f9360a2ff1713581a465a913e421ac1fccc0d35b727dd2abb9"} Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.748339 4812 generic.go:334] "Generic (PLEG): container finished" podID="cda780a7-35ad-48e5-b5fb-f37a6f169769" containerID="ed9ee3f4de92f750609e9cf2b5d1f4f4f36ba3f6d51819f2500aa8436c18e7b0" exitCode=0 Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.748839 4812 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" event={"ID":"cda780a7-35ad-48e5-b5fb-f37a6f169769","Type":"ContainerDied","Data":"ed9ee3f4de92f750609e9cf2b5d1f4f4f36ba3f6d51819f2500aa8436c18e7b0"} Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.754332 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" event={"ID":"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77","Type":"ContainerStarted","Data":"f7e96857d2a4a5812fc1ddb766a2ba0291a99b0e5aaa5499aeb087dfdd65fe55"} Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.754477 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.756699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerStarted","Data":"d00e60313d9d291f91798947d6e8091cd11c4e95acd121b5cc2ae6d5992d459b"} Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.756740 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerStarted","Data":"e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334"} Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.757044 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.757108 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.795552 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" podStartSLOduration=3.795533019 podStartE2EDuration="3.795533019s" podCreationTimestamp="2026-02-18 16:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:38.791759275 +0000 UTC m=+1559.057370184" watchObservedRunningTime="2026-02-18 16:55:38.795533019 +0000 UTC m=+1559.061143928" Feb 18 16:55:38 crc kubenswrapper[4812]: I0218 16:55:38.827760 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84d7b57f88-k4mvt" podStartSLOduration=2.827738048 podStartE2EDuration="2.827738048s" podCreationTimestamp="2026-02-18 16:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:38.811817073 +0000 UTC m=+1559.077427992" watchObservedRunningTime="2026-02-18 16:55:38.827738048 +0000 UTC m=+1559.093348957" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.132256 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.132843 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-log" containerID="cri-o://585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3" gracePeriod=30 Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.133252 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-httpd" containerID="cri-o://389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de" gracePeriod=30 Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.754344 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56594bb5db-7s9w7"] Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.756264 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.760831 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.761745 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.784490 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56594bb5db-7s9w7"] Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.785733 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.785825 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-public-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.785861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-internal-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.785904 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data-custom\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.786003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdf9v\" (UniqueName: \"kubernetes.io/projected/53d8634a-331f-4236-b554-a1a336a4510a-kube-api-access-tdf9v\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.786138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d8634a-331f-4236-b554-a1a336a4510a-logs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.786180 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-combined-ca-bundle\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.808340 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.853545 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.855047 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.876675 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.879199 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerID="585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3" exitCode=143 Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.879343 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerDied","Data":"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.889761 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts\") pod \"f7065b61-1ff1-499d-8a26-0e5597389444\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.889854 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rftfc\" (UniqueName: \"kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc\") pod \"f7065b61-1ff1-499d-8a26-0e5597389444\" (UID: \"f7065b61-1ff1-499d-8a26-0e5597389444\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.889980 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts\") pod \"87284460-f935-40ef-b594-411190374f3a\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.890280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdq95\" (UniqueName: \"kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95\") pod \"48d4c084-1ec7-4443-862a-a0c1087440dc\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.890324 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts\") pod \"48d4c084-1ec7-4443-862a-a0c1087440dc\" (UID: \"48d4c084-1ec7-4443-862a-a0c1087440dc\") " Feb 18 16:55:39 crc kubenswrapper[4812]: 
I0218 16:55:39.890457 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts\") pod \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.890631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknzb\" (UniqueName: \"kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb\") pod \"87284460-f935-40ef-b594-411190374f3a\" (UID: \"87284460-f935-40ef-b594-411190374f3a\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.890776 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsrnx\" (UniqueName: \"kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx\") pod \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\" (UID: \"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4\") " Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.893634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87284460-f935-40ef-b594-411190374f3a" (UID: "87284460-f935-40ef-b594-411190374f3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.893885 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" (UID: "901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.894151 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7065b61-1ff1-499d-8a26-0e5597389444" (UID: "f7065b61-1ff1-499d-8a26-0e5597389444"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.894314 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48d4c084-1ec7-4443-862a-a0c1087440dc" (UID: "48d4c084-1ec7-4443-862a-a0c1087440dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.894678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qnfpn" event={"ID":"410f0c5b-7215-49bb-a9b1-cff11edd203c","Type":"ContainerDied","Data":"227e5f88206c602ecdc1eea3b5aec8970895b4bce6900ab14efce774e5a036f7"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.894717 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="227e5f88206c602ecdc1eea3b5aec8970895b4bce6900ab14efce774e5a036f7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.894853 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.897293 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdf9v\" (UniqueName: \"kubernetes.io/projected/53d8634a-331f-4236-b554-a1a336a4510a-kube-api-access-tdf9v\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.898590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d8634a-331f-4236-b554-a1a336a4510a-logs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.898658 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-combined-ca-bundle\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.899205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.899778 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-public-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.899829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-internal-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.900814 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53d8634a-331f-4236-b554-a1a336a4510a-logs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.902886 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc" (OuterVolumeSpecName: "kube-api-access-rftfc") pod "f7065b61-1ff1-499d-8a26-0e5597389444" (UID: "f7065b61-1ff1-499d-8a26-0e5597389444"). InnerVolumeSpecName "kube-api-access-rftfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.904273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b9a0-account-create-update-q27jl" event={"ID":"48d4c084-1ec7-4443-862a-a0c1087440dc","Type":"ContainerDied","Data":"eb620c55e9cbc01ae7f196521b41d06b11c4aa31a7f98599eb1e71fa33941a57"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.904316 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb620c55e9cbc01ae7f196521b41d06b11c4aa31a7f98599eb1e71fa33941a57" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.904372 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b9a0-account-create-update-q27jl" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.907110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-combined-ca-bundle\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.908460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx" (OuterVolumeSpecName: "kube-api-access-zsrnx") pod "901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" (UID: "901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4"). InnerVolumeSpecName "kube-api-access-zsrnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.912928 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95" (OuterVolumeSpecName: "kube-api-access-bdq95") pod "48d4c084-1ec7-4443-862a-a0c1087440dc" (UID: "48d4c084-1ec7-4443-862a-a0c1087440dc"). InnerVolumeSpecName "kube-api-access-bdq95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.919071 4812 generic.go:334] "Generic (PLEG): container finished" podID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerID="15901cacc9a320f4eb5b92e5559b336c0c861316811dcd35520989ff1768b00d" exitCode=0 Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.919171 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerDied","Data":"15901cacc9a320f4eb5b92e5559b336c0c861316811dcd35520989ff1768b00d"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.919996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb" (OuterVolumeSpecName: "kube-api-access-tknzb") pod "87284460-f935-40ef-b594-411190374f3a" (UID: "87284460-f935-40ef-b594-411190374f3a"). InnerVolumeSpecName "kube-api-access-tknzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920197 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data-custom\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920445 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdq95\" (UniqueName: \"kubernetes.io/projected/48d4c084-1ec7-4443-862a-a0c1087440dc-kube-api-access-bdq95\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920468 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48d4c084-1ec7-4443-862a-a0c1087440dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920477 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920486 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknzb\" (UniqueName: \"kubernetes.io/projected/87284460-f935-40ef-b594-411190374f3a-kube-api-access-tknzb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920494 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsrnx\" (UniqueName: \"kubernetes.io/projected/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4-kube-api-access-zsrnx\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920503 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7065b61-1ff1-499d-8a26-0e5597389444-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920512 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rftfc\" (UniqueName: \"kubernetes.io/projected/f7065b61-1ff1-499d-8a26-0e5597389444-kube-api-access-rftfc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.920521 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87284460-f935-40ef-b594-411190374f3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.925438 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-internal-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.926830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.929880 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-56e6-account-create-update-txtgr" event={"ID":"901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4","Type":"ContainerDied","Data":"8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.930982 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8151029bf7336a4f862bae8cce6eab53f46b549cf41a5b8bff7de567aef4bce6" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.930436 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-56e6-account-create-update-txtgr" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.929949 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-config-data-custom\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.935560 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53d8634a-331f-4236-b554-a1a336a4510a-public-tls-certs\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.937261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdf9v\" (UniqueName: \"kubernetes.io/projected/53d8634a-331f-4236-b554-a1a336a4510a-kube-api-access-tdf9v\") pod \"barbican-api-56594bb5db-7s9w7\" (UID: \"53d8634a-331f-4236-b554-a1a336a4510a\") " pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.944499 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65dc9" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.944916 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65dc9" event={"ID":"f7065b61-1ff1-499d-8a26-0e5597389444","Type":"ContainerDied","Data":"4e89267b4fe7f20cacb2838f4d442c92ec2d37afb8c21b4405e3e32a3c1fc8a0"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.944955 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e89267b4fe7f20cacb2838f4d442c92ec2d37afb8c21b4405e3e32a3c1fc8a0" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.948217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-28pd7" event={"ID":"87284460-f935-40ef-b594-411190374f3a","Type":"ContainerDied","Data":"78aa45ccf75109f6caca07eec64a72dc59e2ed1e6c5ccc4a672add9a0e2d3271"} Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.948274 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78aa45ccf75109f6caca07eec64a72dc59e2ed1e6c5ccc4a672add9a0e2d3271" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.948378 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-28pd7" Feb 18 16:55:39 crc kubenswrapper[4812]: I0218 16:55:39.984827 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.021807 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts\") pod \"410f0c5b-7215-49bb-a9b1-cff11edd203c\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.021864 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8tgp\" (UniqueName: \"kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp\") pod \"410f0c5b-7215-49bb-a9b1-cff11edd203c\" (UID: \"410f0c5b-7215-49bb-a9b1-cff11edd203c\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.030144 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "410f0c5b-7215-49bb-a9b1-cff11edd203c" (UID: "410f0c5b-7215-49bb-a9b1-cff11edd203c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.034353 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp" (OuterVolumeSpecName: "kube-api-access-b8tgp") pod "410f0c5b-7215-49bb-a9b1-cff11edd203c" (UID: "410f0c5b-7215-49bb-a9b1-cff11edd203c"). InnerVolumeSpecName "kube-api-access-b8tgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127174 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127214 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127266 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127294 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127350 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjqt4\" (UniqueName: \"kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127445 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs\") pod \"a2b2904d-8334-485a-bf83-8ce746c157f8\" (UID: \"a2b2904d-8334-485a-bf83-8ce746c157f8\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127949 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0c5b-7215-49bb-a9b1-cff11edd203c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.127965 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8tgp\" (UniqueName: \"kubernetes.io/projected/410f0c5b-7215-49bb-a9b1-cff11edd203c-kube-api-access-b8tgp\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.136535 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.148243 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs" (OuterVolumeSpecName: "logs") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.162266 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts" (OuterVolumeSpecName: "scripts") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.166226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.177341 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4" (OuterVolumeSpecName: "kube-api-access-tjqt4") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "kube-api-access-tjqt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.221585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.238342 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.238398 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.238411 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.238424 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2b2904d-8334-485a-bf83-8ce746c157f8-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.238442 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjqt4\" (UniqueName: \"kubernetes.io/projected/a2b2904d-8334-485a-bf83-8ce746c157f8-kube-api-access-tjqt4\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.311245 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.336361 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.341440 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.341461 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.398809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.443204 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.466208 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data" (OuterVolumeSpecName: "config-data") pod "a2b2904d-8334-485a-bf83-8ce746c157f8" (UID: "a2b2904d-8334-485a-bf83-8ce746c157f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.544949 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b2904d-8334-485a-bf83-8ce746c157f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.584223 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.647066 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vgd\" (UniqueName: \"kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd\") pod \"cda780a7-35ad-48e5-b5fb-f37a6f169769\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.647232 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts\") pod \"cda780a7-35ad-48e5-b5fb-f37a6f169769\" (UID: \"cda780a7-35ad-48e5-b5fb-f37a6f169769\") " Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.648308 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cda780a7-35ad-48e5-b5fb-f37a6f169769" (UID: "cda780a7-35ad-48e5-b5fb-f37a6f169769"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.673127 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd" (OuterVolumeSpecName: "kube-api-access-q5vgd") pod "cda780a7-35ad-48e5-b5fb-f37a6f169769" (UID: "cda780a7-35ad-48e5-b5fb-f37a6f169769"). InnerVolumeSpecName "kube-api-access-q5vgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.749447 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5vgd\" (UniqueName: \"kubernetes.io/projected/cda780a7-35ad-48e5-b5fb-f37a6f169769-kube-api-access-q5vgd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.749958 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda780a7-35ad-48e5-b5fb-f37a6f169769-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.908947 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56594bb5db-7s9w7"] Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.986656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a2b2904d-8334-485a-bf83-8ce746c157f8","Type":"ContainerDied","Data":"e5647adbdbc52bf3303906505861ec74026fe19ffdb512a29b7a9f86e2003c40"} Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.987001 4812 scope.go:117] "RemoveContainer" containerID="15901cacc9a320f4eb5b92e5559b336c0c861316811dcd35520989ff1768b00d" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.987258 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.999541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fc67d6965-tp8p8" event={"ID":"d0217426-584c-43a5-8ada-d12d12452f63","Type":"ContainerStarted","Data":"32d707aa21f7b678dc91b60b4971ea5dd3618c2f287343f59c6a7305d1979220"} Feb 18 16:55:40 crc kubenswrapper[4812]: I0218 16:55:40.999667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fc67d6965-tp8p8" event={"ID":"d0217426-584c-43a5-8ada-d12d12452f63","Type":"ContainerStarted","Data":"7cd65b76adffd9f6766b9218ce6660942d2c11279e60a55a46bd77115a48c0aa"} Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.012882 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" event={"ID":"8fa7f426-d545-42e2-aa86-7b1f3fb6006f","Type":"ContainerStarted","Data":"05fd419dadf0d9b25e22103dabf16211631ae51c4ff04839bec72ba6bfb3423f"} Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.012924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" event={"ID":"8fa7f426-d545-42e2-aa86-7b1f3fb6006f","Type":"ContainerStarted","Data":"09c9c4afc6306b513552ae31ee276f5705e742abd69610d07a133fae2c6741fc"} Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.030600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56594bb5db-7s9w7" event={"ID":"53d8634a-331f-4236-b554-a1a336a4510a","Type":"ContainerStarted","Data":"2400efc48194a07d88e127d9f766bf3ea4a0b42fa4ee69b030bbb42e526f2dbd"} Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.033192 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qnfpn" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.034079 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.034921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-87be-account-create-update-s5bmn" event={"ID":"cda780a7-35ad-48e5-b5fb-f37a6f169769","Type":"ContainerDied","Data":"e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88"} Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.034967 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e518d56502deeaba37a4a47d77d5c4a50cb2ce86d3f4bffe664fb12a235b3b88" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.039934 4812 scope.go:117] "RemoveContainer" containerID="5776db8d56c60ad7586fa7667d233aee586699bfcc9e74b3da97e17f7da8e124" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.049843 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.073197 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.082814 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-fc67d6965-tp8p8" podStartSLOduration=3.483868846 podStartE2EDuration="6.082789292s" podCreationTimestamp="2026-02-18 16:55:35 +0000 UTC" firstStartedPulling="2026-02-18 16:55:36.938631814 +0000 UTC m=+1557.204242723" lastFinishedPulling="2026-02-18 16:55:39.53755226 +0000 UTC m=+1559.803163169" observedRunningTime="2026-02-18 16:55:41.034637297 +0000 UTC m=+1561.300248206" watchObservedRunningTime="2026-02-18 16:55:41.082789292 +0000 UTC m=+1561.348400201" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125230 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125867 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d4c084-1ec7-4443-862a-a0c1087440dc" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125885 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d4c084-1ec7-4443-862a-a0c1087440dc" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125896 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-httpd" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125903 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-httpd" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125917 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda780a7-35ad-48e5-b5fb-f37a6f169769" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125923 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda780a7-35ad-48e5-b5fb-f37a6f169769" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125938 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-log" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125944 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" 
containerName="glance-log" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125958 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87284460-f935-40ef-b594-411190374f3a" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125964 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="87284460-f935-40ef-b594-411190374f3a" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125972 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7065b61-1ff1-499d-8a26-0e5597389444" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125977 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7065b61-1ff1-499d-8a26-0e5597389444" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.125989 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.125997 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: E0218 16:55:41.126006 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f0c5b-7215-49bb-a9b1-cff11edd203c" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126013 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f0c5b-7215-49bb-a9b1-cff11edd203c" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126202 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda780a7-35ad-48e5-b5fb-f37a6f169769" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126226 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="87284460-f935-40ef-b594-411190374f3a" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126238 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d4c084-1ec7-4443-862a-a0c1087440dc" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126252 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-httpd" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126262 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" containerName="mariadb-account-create-update" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126279 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7065b61-1ff1-499d-8a26-0e5597389444" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126292 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="410f0c5b-7215-49bb-a9b1-cff11edd203c" containerName="mariadb-database-create" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.126300 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" containerName="glance-log" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.127423 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.130922 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.131168 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.134334 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.138017 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54759bc498-rskxp" podStartSLOduration=3.599194588 podStartE2EDuration="6.137998742s" podCreationTimestamp="2026-02-18 16:55:35 +0000 UTC" firstStartedPulling="2026-02-18 16:55:37.003787291 +0000 UTC m=+1557.269398200" lastFinishedPulling="2026-02-18 16:55:39.542591445 +0000 UTC m=+1559.808202354" observedRunningTime="2026-02-18 16:55:41.065655507 +0000 UTC m=+1561.331266426" watchObservedRunningTime="2026-02-18 16:55:41.137998742 +0000 UTC m=+1561.403609651" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176410 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176453 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176493 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-logs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176556 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.176865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.177081 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.177141 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c727j\" (UniqueName: \"kubernetes.io/projected/cedf67dc-05b4-4294-84d6-19c9c649145c-kube-api-access-c727j\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278753 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-logs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278921 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.278994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.279012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.279034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c727j\" (UniqueName: \"kubernetes.io/projected/cedf67dc-05b4-4294-84d6-19c9c649145c-kube-api-access-c727j\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.279370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.279381 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cedf67dc-05b4-4294-84d6-19c9c649145c-logs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.279673 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.284073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.284165 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.284226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.284349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cedf67dc-05b4-4294-84d6-19c9c649145c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.297964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c727j\" (UniqueName: \"kubernetes.io/projected/cedf67dc-05b4-4294-84d6-19c9c649145c-kube-api-access-c727j\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 
16:55:41.309374 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"cedf67dc-05b4-4294-84d6-19c9c649145c\") " pod="openstack/glance-default-internal-api-0" Feb 18 16:55:41 crc kubenswrapper[4812]: I0218 16:55:41.479697 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.048252 4812 generic.go:334] "Generic (PLEG): container finished" podID="4b87b144-e1c5-4d51-b6f1-6896913188d1" containerID="f16c6f19ff0a675905f40cc8ed610c1f2889f24418838793f7e2cb03a98ddf0f" exitCode=0 Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.048333 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-284h9" event={"ID":"4b87b144-e1c5-4d51-b6f1-6896913188d1","Type":"ContainerDied","Data":"f16c6f19ff0a675905f40cc8ed610c1f2889f24418838793f7e2cb03a98ddf0f"} Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.052545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56594bb5db-7s9w7" event={"ID":"53d8634a-331f-4236-b554-a1a336a4510a","Type":"ContainerStarted","Data":"20120058c8625f73d80e11761ec82ad4d98e6828b7e6a343b668ecae84d9f271"} Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.052595 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.052609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56594bb5db-7s9w7" event={"ID":"53d8634a-331f-4236-b554-a1a336a4510a","Type":"ContainerStarted","Data":"3e69b34efa77ccbf4a48ce9b65623369c512e7b56fa07acbbfab96c4e8a18afe"} Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.053080 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.102815 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56594bb5db-7s9w7" podStartSLOduration=3.102796362 podStartE2EDuration="3.102796362s" podCreationTimestamp="2026-02-18 16:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:42.09988202 +0000 UTC m=+1562.365492939" watchObservedRunningTime="2026-02-18 16:55:42.102796362 +0000 UTC m=+1562.368407271" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.145147 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.276736 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.182:9292/healthcheck\": read tcp 10.217.0.2:58184->10.217.0.182:9292: read: connection reset by peer" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.277003 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.182:9292/healthcheck\": read tcp 10.217.0.2:58176->10.217.0.182:9292: read: connection 
reset by peer" Feb 18 16:55:42 crc kubenswrapper[4812]: I0218 16:55:42.523554 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b2904d-8334-485a-bf83-8ce746c157f8" path="/var/lib/kubelet/pods/a2b2904d-8334-485a-bf83-8ce746c157f8/volumes" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.034762 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.126633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cedf67dc-05b4-4294-84d6-19c9c649145c","Type":"ContainerStarted","Data":"be70ab6083250730793f840286682242359acc958f4dcec214f4949301692c71"} Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.126682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cedf67dc-05b4-4294-84d6-19c9c649145c","Type":"ContainerStarted","Data":"611d18bb469326f3938a7b55ceceb20ba95cfe809497c33ec8270c0993f30988"} Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.135615 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerID="389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de" exitCode=0 Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.135712 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.135764 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerDied","Data":"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de"} Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.135806 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9f6cf31-d965-40d4-a560-941ffc0dc3eb","Type":"ContainerDied","Data":"5708e9d421360e7e795657bc57ff08ea32154e082e7a62794939c1b6f3829c05"} Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.135832 4812 scope.go:117] "RemoveContainer" containerID="389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137551 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137584 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137679 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bgfdh\" (UniqueName: \"kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137724 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.137961 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.138087 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.138236 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs\") pod \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\" (UID: \"e9f6cf31-d965-40d4-a560-941ffc0dc3eb\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.142349 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.142783 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs" (OuterVolumeSpecName: "logs") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.146348 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts" (OuterVolumeSpecName: "scripts") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.146721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.164912 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh" (OuterVolumeSpecName: "kube-api-access-bgfdh") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "kube-api-access-bgfdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.186572 4812 scope.go:117] "RemoveContainer" containerID="585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.207924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.224242 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.244556 4812 scope.go:117] "RemoveContainer" containerID="389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.245957 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.245985 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.245997 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.246009 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgfdh\" (UniqueName: \"kubernetes.io/projected/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-kube-api-access-bgfdh\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.246032 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.246044 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.246054 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: E0218 16:55:43.249530 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de\": container with ID starting with 389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de not found: ID does not exist" containerID="389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.249564 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de"} err="failed to get container status \"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de\": rpc error: code = NotFound desc = could not find container \"389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de\": container with ID starting with 389fe046ec8946c928323ba3f3593cf8ae92ba94b98a8996f8782ee27bf1a4de not found: ID does not exist" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.249586 4812 scope.go:117] "RemoveContainer" containerID="585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3" Feb 18 16:55:43 crc kubenswrapper[4812]: E0218 16:55:43.265351 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3\": container with ID starting with 585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3 not found: ID does not exist" containerID="585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.265416 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3"} err="failed to get container status \"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3\": rpc error: code = NotFound desc = could not find container \"585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3\": container with ID starting with 585cec6dc8861584879e6dc3ca7b01f47860cac0b0cc16ec76980206d30109d3 not found: ID does not exist" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.265481 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data" (OuterVolumeSpecName: "config-data") pod "e9f6cf31-d965-40d4-a560-941ffc0dc3eb" (UID: "e9f6cf31-d965-40d4-a560-941ffc0dc3eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.305118 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.348486 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f6cf31-d965-40d4-a560-941ffc0dc3eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.348533 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.486007 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.569890 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.595037 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:43 crc kubenswrapper[4812]: E0218 16:55:43.595562 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-log" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.595582 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-log" Feb 18 16:55:43 crc kubenswrapper[4812]: E0218 16:55:43.595612 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-httpd" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.595620 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-httpd" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.595824 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-log" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.595851 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" containerName="glance-httpd" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.596890 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.600495 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.600820 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.620995 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.624853 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-284h9" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658234 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658377 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-logs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr64\" (UniqueName: \"kubernetes.io/projected/3fa1f55d-b076-4277-8ba9-c80b987587fb-kube-api-access-rsr64\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.658776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761675 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data\") pod 
\"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761774 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts\") pod \"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761797 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id\") pod \"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761828 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkqb\" (UniqueName: \"kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb\") pod \"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761862 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data\") pod \"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.761925 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle\") pod \"4b87b144-e1c5-4d51-b6f1-6896913188d1\" (UID: \"4b87b144-e1c5-4d51-b6f1-6896913188d1\") " Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762283 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-logs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762340 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762371 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsr64\" (UniqueName: \"kubernetes.io/projected/3fa1f55d-b076-4277-8ba9-c80b987587fb-kube-api-access-rsr64\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762390 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.762527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.763638 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.763714 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fa1f55d-b076-4277-8ba9-c80b987587fb-logs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.768178 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.770211 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.770592 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.772262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.773172 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.776774 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts" (OuterVolumeSpecName: "scripts") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.781457 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsr64\" (UniqueName: \"kubernetes.io/projected/3fa1f55d-b076-4277-8ba9-c80b987587fb-kube-api-access-rsr64\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.788809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb" (OuterVolumeSpecName: "kube-api-access-6nkqb") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "kube-api-access-6nkqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.796379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.808630 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa1f55d-b076-4277-8ba9-c80b987587fb-scripts\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.813587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.848227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"3fa1f55d-b076-4277-8ba9-c80b987587fb\") " pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.864487 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.864534 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.864550 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b87b144-e1c5-4d51-b6f1-6896913188d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.864563 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkqb\" (UniqueName: \"kubernetes.io/projected/4b87b144-e1c5-4d51-b6f1-6896913188d1-kube-api-access-6nkqb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.864583 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.867388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data" (OuterVolumeSpecName: "config-data") pod "4b87b144-e1c5-4d51-b6f1-6896913188d1" (UID: "4b87b144-e1c5-4d51-b6f1-6896913188d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.941294 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 16:55:43 crc kubenswrapper[4812]: I0218 16:55:43.966442 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b87b144-e1c5-4d51-b6f1-6896913188d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.166211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cedf67dc-05b4-4294-84d6-19c9c649145c","Type":"ContainerStarted","Data":"1b3b7178baf636ac82b05482b2b45e460d7ec9ccbf40940d4e64ee40dcf0e3b1"} Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.169735 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-284h9" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.169830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-284h9" event={"ID":"4b87b144-e1c5-4d51-b6f1-6896913188d1","Type":"ContainerDied","Data":"bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2"} Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.169885 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef4ce66f7796c58738ce19b11c0c8e2736a9de987168ce1a9fd9f2eb5d2b0d2" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.196002 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.195983109 podStartE2EDuration="3.195983109s" podCreationTimestamp="2026-02-18 16:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:44.1944225 +0000 UTC m=+1564.460033409" watchObservedRunningTime="2026-02-18 16:55:44.195983109 +0000 UTC m=+1564.461594018" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.335955 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:55:44 crc kubenswrapper[4812]: E0218 16:55:44.336345 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" containerName="cinder-db-sync" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.336361 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" containerName="cinder-db-sync" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.338956 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" containerName="cinder-db-sync" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.339966 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.343122 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.343467 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7hnct" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.343709 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.346908 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.357152 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378560 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td48k\" (UniqueName: \"kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378693 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.378844 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.481439 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td48k\" (UniqueName: \"kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") 
" pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.481788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.481928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.483122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.483402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.483695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.482155 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.488966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.488995 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.489380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.493438 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.506172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td48k\" (UniqueName: \"kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k\") pod \"cinder-scheduler-0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.545910 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f6cf31-d965-40d4-a560-941ffc0dc3eb" path="/var/lib/kubelet/pods/e9f6cf31-d965-40d4-a560-941ffc0dc3eb/volumes" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.555622 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.556014 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="dnsmasq-dns" containerID="cri-o://f7e96857d2a4a5812fc1ddb766a2ba0291a99b0e5aaa5499aeb087dfdd65fe55" gracePeriod=10 Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.558214 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.589588 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.591122 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.632852 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.688507 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.688935 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.689132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.689254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655b7\" (UniqueName: \"kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " 
pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.689494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.701436 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.702042 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.702207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.787665 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.791753 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.797008 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.803026 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.803965 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.804087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.804222 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.804313 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.804391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-655b7\" (UniqueName: 
\"kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.804533 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.805638 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.806274 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.807022 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.807293 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.808219 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.830600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-655b7\" (UniqueName: \"kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7\") pod \"dnsmasq-dns-795f4db4bc-74xll\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l442s\" (UniqueName: \"kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906462 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts\") pod 
\"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:44 crc kubenswrapper[4812]: I0218 16:55:44.906661 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.009165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l442s\" (UniqueName: \"kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.009790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.009916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.010117 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.010218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.010358 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.010464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.012163 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.012758 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.020481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.022729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.028477 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.034425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.055065 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l442s\" (UniqueName: \"kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s\") pod \"cinder-api-0\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.075011 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.122778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.201666 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fa1f55d-b076-4277-8ba9-c80b987587fb","Type":"ContainerStarted","Data":"9d6470611be59839aa349f43db6ec38c71b0b7e7199a4fecef2a6ed6c1248108"} Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.241679 4812 generic.go:334] "Generic (PLEG): container finished" podID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerID="f7e96857d2a4a5812fc1ddb766a2ba0291a99b0e5aaa5499aeb087dfdd65fe55" exitCode=0 Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.241939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" event={"ID":"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77","Type":"ContainerDied","Data":"f7e96857d2a4a5812fc1ddb766a2ba0291a99b0e5aaa5499aeb087dfdd65fe55"} Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.287803 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zdg8w"] Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.290760 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.298655 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.299551 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.299675 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vczxl" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.319059 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zdg8w"] Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.322305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.322708 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpn5\" (UniqueName: \"kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.322760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.322907 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.425113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.425580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbpn5\" (UniqueName: \"kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.425622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.425783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.439452 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.440241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.443760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.457970 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbpn5\" (UniqueName: \"kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5\") pod \"nova-cell0-conductor-db-sync-zdg8w\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.459263 4812 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.557123 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.564660 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.629907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.630000 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.630064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.630143 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.630238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndrvt\" (UniqueName: \"kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.630307 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc\") pod \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\" (UID: \"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77\") " Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.665352 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt" (OuterVolumeSpecName: "kube-api-access-ndrvt") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "kube-api-access-ndrvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.728218 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config" (OuterVolumeSpecName: "config") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.733648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.734198 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.734218 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndrvt\" (UniqueName: \"kubernetes.io/projected/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-kube-api-access-ndrvt\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.734600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.741943 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.748431 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" (UID: "411c1de8-5c9c-4e8d-a4c2-1ef61d926f77"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.799631 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.836888 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.836918 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.836930 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:45 crc kubenswrapper[4812]: I0218 16:55:45.836938 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.043643 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.249323 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zdg8w"] Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.297646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fa1f55d-b076-4277-8ba9-c80b987587fb","Type":"ContainerStarted","Data":"77bd9bb3b27c1075ca2ae20ab9d7f52bf85843d8a34cd6f32880773fa9449628"} Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.304870 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" event={"ID":"1a32938b-c202-455b-ad5d-0cc3b3f94693","Type":"ContainerStarted","Data":"db258643cad66244b3952021f72cbb299b21f9e06e54fb5db45456b1c9046e2e"} Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.310300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" event={"ID":"411c1de8-5c9c-4e8d-a4c2-1ef61d926f77","Type":"ContainerDied","Data":"0e765b0c5aceaea756148330d9d4167293df0d7718b773a93dac15f7403aa3d5"} Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.310361 4812 scope.go:117] "RemoveContainer" containerID="f7e96857d2a4a5812fc1ddb766a2ba0291a99b0e5aaa5499aeb087dfdd65fe55" Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.310558 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-mjc8n" Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.334502 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerStarted","Data":"3618cf98b6e44745d205475c88144910693a0f49d886054b0164fae8f8f748fe"} Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.346325 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerStarted","Data":"7b9ea93ff9bd90044091c1312f678a571822d48dae992c81d1e9f7efada5bbe3"} Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.422296 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.434373 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-mjc8n"] Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.526341 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" path="/var/lib/kubelet/pods/411c1de8-5c9c-4e8d-a4c2-1ef61d926f77/volumes" Feb 18 16:55:46 crc kubenswrapper[4812]: I0218 16:55:46.794267 4812 scope.go:117] "RemoveContainer" containerID="486c3f8e107c279b9d67c4e405239ee8e53c01b5bd27201f27add7669d8c6958" Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.025252 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.385493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3fa1f55d-b076-4277-8ba9-c80b987587fb","Type":"ContainerStarted","Data":"f5d37edeff48efc2e7fc0bfb534083587853f510e2641269d6ea4541819057ea"} Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.400207 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.400870 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerID="8158b8b2599890f6b8b8547367dd768ea83064a4e6e7d99826e9dcdaf5ace8b3" exitCode=0 Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.400939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" event={"ID":"1a32938b-c202-455b-ad5d-0cc3b3f94693","Type":"ContainerDied","Data":"8158b8b2599890f6b8b8547367dd768ea83064a4e6e7d99826e9dcdaf5ace8b3"} Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.424225 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.4242019599999995 podStartE2EDuration="4.42420196s" podCreationTimestamp="2026-02-18 16:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:47.410746576 +0000 UTC m=+1567.676357485" watchObservedRunningTime="2026-02-18 16:55:47.42420196 +0000 UTC m=+1567.689812869" Feb 18 16:55:47 crc kubenswrapper[4812]: I0218 16:55:47.425490 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" 
event={"ID":"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99","Type":"ContainerStarted","Data":"6375072a58375aa34a68741b927b8e5487988dda16e89a95815404d713ad9c9e"} Feb 18 16:55:48 crc kubenswrapper[4812]: I0218 16:55:48.528700 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:55:48 crc kubenswrapper[4812]: E0218 16:55:48.529318 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:55:48 crc kubenswrapper[4812]: I0218 16:55:48.587587 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" podStartSLOduration=4.587567717 podStartE2EDuration="4.587567717s" podCreationTimestamp="2026-02-18 16:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:48.582668175 +0000 UTC m=+1568.848279104" watchObservedRunningTime="2026-02-18 16:55:48.587567717 +0000 UTC m=+1568.853178626" Feb 18 16:55:48 crc kubenswrapper[4812]: I0218 16:55:48.588685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:48 crc kubenswrapper[4812]: I0218 16:55:48.588711 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerStarted","Data":"be3993d206fdaa57d763ab9baae8d52dd49332adf8f31b145e77f3a9803670d6"} Feb 18 16:55:48 crc kubenswrapper[4812]: I0218 16:55:48.588727 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" event={"ID":"1a32938b-c202-455b-ad5d-0cc3b3f94693","Type":"ContainerStarted","Data":"cec33189e140e353ce9f3b7be124f21cfe85f714e4853dad789bf44b6d761960"} Feb 18 16:55:49 crc kubenswrapper[4812]: I0218 16:55:49.998077 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.032771 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.599713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerStarted","Data":"b10875015a85e943edf520eb3fdc21443b693a115d539feb58e3f654bd298439"} Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.600035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerStarted","Data":"382c4b3ff8927b988b52321d8e7fe8f625ddb430aa8f7fd75b51228b4f39d3b4"} Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.615766 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api-log" containerID="cri-o://be3993d206fdaa57d763ab9baae8d52dd49332adf8f31b145e77f3a9803670d6" gracePeriod=30 Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 
16:55:50.616127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerStarted","Data":"f44243636fa1bd97833760b861191926e381be5f94d007dbc5c0f356dd69195b"} Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.616176 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.616211 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api" containerID="cri-o://f44243636fa1bd97833760b861191926e381be5f94d007dbc5c0f356dd69195b" gracePeriod=30 Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.694602 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.4293899119999995 podStartE2EDuration="6.694576887s" podCreationTimestamp="2026-02-18 16:55:44 +0000 UTC" firstStartedPulling="2026-02-18 16:55:45.472337869 +0000 UTC m=+1565.737948768" lastFinishedPulling="2026-02-18 16:55:47.737524834 +0000 UTC m=+1568.003135743" observedRunningTime="2026-02-18 16:55:50.642952776 +0000 UTC m=+1570.908563685" watchObservedRunningTime="2026-02-18 16:55:50.694576887 +0000 UTC m=+1570.960187796" Feb 18 16:55:50 crc kubenswrapper[4812]: I0218 16:55:50.749241 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.749218713 podStartE2EDuration="6.749218713s" podCreationTimestamp="2026-02-18 16:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:50.668275734 +0000 UTC m=+1570.933886643" watchObservedRunningTime="2026-02-18 16:55:50.749218713 +0000 UTC m=+1571.014829622" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.481392 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.481749 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.535912 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.574241 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.641225 4812 generic.go:334] "Generic (PLEG): container finished" podID="270cc802-3e88-492b-900b-d756be75a305" containerID="f44243636fa1bd97833760b861191926e381be5f94d007dbc5c0f356dd69195b" exitCode=0 Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.641258 4812 generic.go:334] "Generic (PLEG): container finished" podID="270cc802-3e88-492b-900b-d756be75a305" containerID="be3993d206fdaa57d763ab9baae8d52dd49332adf8f31b145e77f3a9803670d6" exitCode=143 Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.642166 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerDied","Data":"f44243636fa1bd97833760b861191926e381be5f94d007dbc5c0f356dd69195b"} Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.642217 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerDied","Data":"be3993d206fdaa57d763ab9baae8d52dd49332adf8f31b145e77f3a9803670d6"} Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.642233 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"270cc802-3e88-492b-900b-d756be75a305","Type":"ContainerDied","Data":"7b9ea93ff9bd90044091c1312f678a571822d48dae992c81d1e9f7efada5bbe3"} Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.642247 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9ea93ff9bd90044091c1312f678a571822d48dae992c81d1e9f7efada5bbe3" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.643329 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.643472 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.696371 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.790640 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l442s\" (UniqueName: \"kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.790710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.790784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.790865 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.790865 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.799894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.799988 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.800016 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs\") pod \"270cc802-3e88-492b-900b-d756be75a305\" (UID: \"270cc802-3e88-492b-900b-d756be75a305\") " Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.800748 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs" (OuterVolumeSpecName: "logs") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.800991 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270cc802-3e88-492b-900b-d756be75a305-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.801004 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/270cc802-3e88-492b-900b-d756be75a305-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.827324 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts" (OuterVolumeSpecName: "scripts") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.827575 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s" (OuterVolumeSpecName: "kube-api-access-l442s") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "kube-api-access-l442s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.832485 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.846590 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.875641 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data" (OuterVolumeSpecName: "config-data") pod "270cc802-3e88-492b-900b-d756be75a305" (UID: "270cc802-3e88-492b-900b-d756be75a305"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.903125 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l442s\" (UniqueName: \"kubernetes.io/projected/270cc802-3e88-492b-900b-d756be75a305-kube-api-access-l442s\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.903155 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.903167 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.903177 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:51 crc kubenswrapper[4812]: I0218 16:55:51.903186 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/270cc802-3e88-492b-900b-d756be75a305-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.650820 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.700734 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.716364 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.753392 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:52 crc kubenswrapper[4812]: E0218 16:55:52.753964 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.753985 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api" Feb 18 16:55:52 crc kubenswrapper[4812]: E0218 16:55:52.753998 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api-log" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754005 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api-log" Feb 18 16:55:52 crc kubenswrapper[4812]: E0218 16:55:52.754045 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="init" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754054 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="init" Feb 18 16:55:52 crc kubenswrapper[4812]: E0218 16:55:52.754069 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="dnsmasq-dns" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754076 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="dnsmasq-dns" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754308 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="411c1de8-5c9c-4e8d-a4c2-1ef61d926f77" containerName="dnsmasq-dns" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754328 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api-log" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.754345 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="270cc802-3e88-492b-900b-d756be75a305" containerName="cinder-api" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.755681 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.760767 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.760967 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.761071 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.808173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.886274 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7750602c-99bc-47df-850d-ed581888d80d-logs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7750602c-99bc-47df-850d-ed581888d80d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data-custom\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922445 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-scripts\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922476 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922544 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922602 
4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgwq\" (UniqueName: \"kubernetes.io/projected/7750602c-99bc-47df-850d-ed581888d80d-kube-api-access-sjgwq\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.922697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:52 crc kubenswrapper[4812]: I0218 16:55:52.979432 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56594bb5db-7s9w7" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data-custom\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-scripts\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024875 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024918 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjgwq\" (UniqueName: \"kubernetes.io/projected/7750602c-99bc-47df-850d-ed581888d80d-kube-api-access-sjgwq\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.024981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.025955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.026288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/7750602c-99bc-47df-850d-ed581888d80d-logs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.026320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7750602c-99bc-47df-850d-ed581888d80d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.026488 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7750602c-99bc-47df-850d-ed581888d80d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.026791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7750602c-99bc-47df-850d-ed581888d80d-logs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.031926 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-scripts\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.034728 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data-custom\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.040135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-config-data\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.042088 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.043258 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.049917 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7750602c-99bc-47df-850d-ed581888d80d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.073935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjgwq\" (UniqueName: 
\"kubernetes.io/projected/7750602c-99bc-47df-850d-ed581888d80d-kube-api-access-sjgwq\") pod \"cinder-api-0\" (UID: \"7750602c-99bc-47df-850d-ed581888d80d\") " pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.089464 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.089769 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.089874 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" containerID="cri-o://e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334" gracePeriod=30 Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.089925 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" containerID="cri-o://d00e60313d9d291f91798947d6e8091cd11c4e95acd121b5cc2ae6d5992d459b" gracePeriod=30 Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.666369 4812 generic.go:334] "Generic (PLEG): container finished" podID="afae0da4-aba0-420d-928b-8dde8472a40e" containerID="e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334" exitCode=143 Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.667174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerDied","Data":"e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334"} Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.734347 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.941573 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.941615 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.989167 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 16:55:53 crc kubenswrapper[4812]: I0218 16:55:53.992726 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 16:55:54 crc kubenswrapper[4812]: W0218 16:55:54.445411 4812 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod270cc802_3e88_492b_900b_d756be75a305.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod270cc802_3e88_492b_900b_d756be75a305.slice: no such file or directory Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.559874 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270cc802-3e88-492b-900b-d756be75a305" path="/var/lib/kubelet/pods/270cc802-3e88-492b-900b-d756be75a305/volumes" Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.702668 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.713309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7750602c-99bc-47df-850d-ed581888d80d","Type":"ContainerStarted","Data":"a79a1707c5149794bf7c2ae27011c2e8628134a89b9ced1c1af5b46efff31f19"} Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.713359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7750602c-99bc-47df-850d-ed581888d80d","Type":"ContainerStarted","Data":"71ccedc9fc677b199e6d9151b1bc9540871e04258fa2b2123d47326778b5fe97"} Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.718265 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.199:8080/\": dial tcp 10.217.0.199:8080: connect: connection refused" Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.726032 4812 generic.go:334] "Generic (PLEG): container finished" podID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerID="165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d" exitCode=137 Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.726110 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerDied","Data":"165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d"} Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.726681 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.727247 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 16:55:54 crc kubenswrapper[4812]: E0218 16:55:54.743928 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafae0da4_aba0_420d_928b_8dde8472a40e.slice/crio-e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6584b8b_5fb9_4406_95f4_63819a93a0fd.slice/crio-165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6584b8b_5fb9_4406_95f4_63819a93a0fd.slice/crio-conmon-165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411c1de8_5c9c_4e8d_a4c2_1ef61d926f77.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafae0da4_aba0_420d_928b_8dde8472a40e.slice/crio-conmon-e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411c1de8_5c9c_4e8d_a4c2_1ef61d926f77.slice/crio-0e765b0c5aceaea756148330d9d4167293df0d7718b773a93dac15f7403aa3d5\": RecentStats: unable to find data in memory cache]" Feb 18 16:55:54 crc kubenswrapper[4812]: I0218 16:55:54.935438 4812 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.079738 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095484 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095576 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095621 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095646 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095674 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpm4\" (UniqueName: \"kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.095759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.096017 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd\") pod \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\" (UID: \"c6584b8b-5fb9-4406-95f4-63819a93a0fd\") " Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.096806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.097605 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.112857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts" (OuterVolumeSpecName: "scripts") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.132536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4" (OuterVolumeSpecName: "kube-api-access-tmpm4") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "kube-api-access-tmpm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.198611 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.198648 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.198663 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6584b8b-5fb9-4406-95f4-63819a93a0fd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.198676 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpm4\" (UniqueName: \"kubernetes.io/projected/c6584b8b-5fb9-4406-95f4-63819a93a0fd-kube-api-access-tmpm4\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.205280 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.207375 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.207610 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" containerID="cri-o://829741699c5e6a6b61fd0004521e95c7464506eab9e2d5272f5815a27c941f73" gracePeriod=10 Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.301355 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.343225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.404054 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.417992 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data" (OuterVolumeSpecName: "config-data") pod "c6584b8b-5fb9-4406-95f4-63819a93a0fd" (UID: "c6584b8b-5fb9-4406-95f4-63819a93a0fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.505572 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6584b8b-5fb9-4406-95f4-63819a93a0fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.752545 4812 generic.go:334] "Generic (PLEG): container finished" podID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerID="829741699c5e6a6b61fd0004521e95c7464506eab9e2d5272f5815a27c941f73" exitCode=0 Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.752655 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerDied","Data":"829741699c5e6a6b61fd0004521e95c7464506eab9e2d5272f5815a27c941f73"} Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.755925 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.756477 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6584b8b-5fb9-4406-95f4-63819a93a0fd","Type":"ContainerDied","Data":"b55b5a762a47ac89041a2e38785fdfe7ea97aa3cd2cea5216df649588eabc655"} Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.756556 4812 scope.go:117] "RemoveContainer" containerID="165e4e941a9bdd0f25be2f24c0c33a9a87ad9a820a6eb982b789e8163711174d" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.856599 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.870418 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898209 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:55 crc kubenswrapper[4812]: E0218 16:55:55.898714 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="proxy-httpd" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898734 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="proxy-httpd" Feb 18 16:55:55 crc kubenswrapper[4812]: E0218 16:55:55.898753 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="sg-core" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898759 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="sg-core" Feb 18 16:55:55 crc kubenswrapper[4812]: E0218 16:55:55.898777 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-central-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898784 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-central-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: E0218 16:55:55.898813 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-notification-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898820 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-notification-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.898997 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="proxy-httpd" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.899009 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-notification-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.899030 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="ceilometer-central-agent" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.899046 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" containerName="sg-core" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.901020 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.904609 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.904890 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.905008 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 16:55:55 crc kubenswrapper[4812]: I0218 16:55:55.928796 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spj54\" (UniqueName: \"kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049835 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049914 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.049939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.151366 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.151642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.151736 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.151851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.151937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.152178 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.152739 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spj54\" (UniqueName: \"kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.152665 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.152393 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.153326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.161555 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.163155 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.163785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.166834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.177809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.180225 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spj54\" (UniqueName: \"kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54\") pod \"ceilometer-0\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.237468 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.523377 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6584b8b-5fb9-4406-95f4-63819a93a0fd" path="/var/lib/kubelet/pods/c6584b8b-5fb9-4406-95f4-63819a93a0fd/volumes" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.566646 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": read tcp 10.217.0.2:51744->10.217.0.195:9311: read: connection reset by peer" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.566682 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": read tcp 10.217.0.2:51746->10.217.0.195:9311: read: connection reset by peer" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.772955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7750602c-99bc-47df-850d-ed581888d80d","Type":"ContainerStarted","Data":"ba6894bb3ff19e62a00336379e81bf86983c85a428befe655cfec158e5bf254e"} Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.781695 4812 generic.go:334] "Generic (PLEG): container finished" podID="afae0da4-aba0-420d-928b-8dde8472a40e" containerID="d00e60313d9d291f91798947d6e8091cd11c4e95acd121b5cc2ae6d5992d459b" exitCode=0 Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.781827 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.781837 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:55:56 crc kubenswrapper[4812]: I0218 16:55:56.781997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerDied","Data":"d00e60313d9d291f91798947d6e8091cd11c4e95acd121b5cc2ae6d5992d459b"} Feb 18 16:55:57 crc kubenswrapper[4812]: I0218 16:55:57.792776 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 16:55:57 crc kubenswrapper[4812]: I0218 16:55:57.828568 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.82855045 podStartE2EDuration="5.82855045s" podCreationTimestamp="2026-02-18 16:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:55:57.81000479 +0000 UTC m=+1578.075615749" watchObservedRunningTime="2026-02-18 16:55:57.82855045 +0000 UTC m=+1578.094161359" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.044633 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.044786 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.178007 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.178140 4812 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.186935 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.346857 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.373621 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.487557 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.521067 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:56:00 crc kubenswrapper[4812]: E0218 16:56:00.527171 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.830539 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="cinder-scheduler" containerID="cri-o://382c4b3ff8927b988b52321d8e7fe8f625ddb430aa8f7fd75b51228b4f39d3b4" gracePeriod=30 Feb 18 16:56:00 crc kubenswrapper[4812]: I0218 16:56:00.830652 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="probe" containerID="cri-o://b10875015a85e943edf520eb3fdc21443b693a115d539feb58e3f654bd298439" gracePeriod=30 Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.321079 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.559881 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": dial tcp 10.217.0.195:9311: connect: connection refused" Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.560016 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84d7b57f88-k4mvt" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": dial tcp 10.217.0.195:9311: connect: connection refused" Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.707565 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: i/o timeout" Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.842092 4812 generic.go:334] "Generic (PLEG): container finished" podID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" 
containerID="b10875015a85e943edf520eb3fdc21443b693a115d539feb58e3f654bd298439" exitCode=0 Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.842137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerDied","Data":"b10875015a85e943edf520eb3fdc21443b693a115d539feb58e3f654bd298439"} Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.842184 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerDied","Data":"382c4b3ff8927b988b52321d8e7fe8f625ddb430aa8f7fd75b51228b4f39d3b4"} Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.842154 4812 generic.go:334] "Generic (PLEG): container finished" podID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerID="382c4b3ff8927b988b52321d8e7fe8f625ddb430aa8f7fd75b51228b4f39d3b4" exitCode=0 Feb 18 16:56:01 crc kubenswrapper[4812]: I0218 16:56:01.938794 4812 scope.go:117] "RemoveContainer" containerID="085f5b574b65fd2fb5995c2eca12c75def08b070a68072dce68f928534b455d7" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.097979 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.112257 4812 scope.go:117] "RemoveContainer" containerID="ee292a4748e326886abc00210c98571bb0797e9aaf272726208aa6436c4058b7" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.171025 4812 scope.go:117] "RemoveContainer" containerID="792f751e73e8a96018a7a4b38fbe0f668a1eb5cad4284179943f9f6ca303d2ee" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.209894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.210280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.214613 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.214934 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.215457 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.216140 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config\") pod \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\" (UID: \"8a3320a1-03ca-41c2-852b-b49bba57ca5e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.238449 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl" (OuterVolumeSpecName: "kube-api-access-cqrpl") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "kube-api-access-cqrpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.322017 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.327943 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.334267 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.334308 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqrpl\" (UniqueName: \"kubernetes.io/projected/8a3320a1-03ca-41c2-852b-b49bba57ca5e-kube-api-access-cqrpl\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.334321 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.348967 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.379522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.419831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config" (OuterVolumeSpecName: "config") pod "8a3320a1-03ca-41c2-852b-b49bba57ca5e" (UID: "8a3320a1-03ca-41c2-852b-b49bba57ca5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.434903 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.436173 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.436202 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.436214 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3320a1-03ca-41c2-852b-b49bba57ca5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.685260 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.759597 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.887547 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs\") pod \"afae0da4-aba0-420d-928b-8dde8472a40e\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.887614 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.887717 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle\") pod \"afae0da4-aba0-420d-928b-8dde8472a40e\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: 
I0218 16:56:02.892432 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892521 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892558 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td48k\" (UniqueName: \"kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k\") pod \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\" (UID: \"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892612 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data\") pod \"afae0da4-aba0-420d-928b-8dde8472a40e\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892701 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b65sd\" (UniqueName: \"kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd\") pod \"afae0da4-aba0-420d-928b-8dde8472a40e\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.892734 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom\") pod \"afae0da4-aba0-420d-928b-8dde8472a40e\" (UID: \"afae0da4-aba0-420d-928b-8dde8472a40e\") " Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.898888 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "afae0da4-aba0-420d-928b-8dde8472a40e" (UID: "afae0da4-aba0-420d-928b-8dde8472a40e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.899423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs" (OuterVolumeSpecName: "logs") pod "afae0da4-aba0-420d-928b-8dde8472a40e" (UID: "afae0da4-aba0-420d-928b-8dde8472a40e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.899472 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.923160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66ecf1c3-50b6-47dc-a17d-7b643a57bfc0","Type":"ContainerDied","Data":"3618cf98b6e44745d205475c88144910693a0f49d886054b0164fae8f8f748fe"} Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.923235 4812 scope.go:117] "RemoveContainer" containerID="b10875015a85e943edf520eb3fdc21443b693a115d539feb58e3f654bd298439" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.923319 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.925526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerStarted","Data":"4b9b462809becd1de3d2ba993e9e1c72b268ddff97c3e4d3ad75c924631a3ba5"} Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.926348 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts" (OuterVolumeSpecName: "scripts") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.963288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.963407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k" (OuterVolumeSpecName: "kube-api-access-td48k") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "kube-api-access-td48k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.965452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84d7b57f88-k4mvt" event={"ID":"afae0da4-aba0-420d-928b-8dde8472a40e","Type":"ContainerDied","Data":"becf64d57e8da89f411328e8deba9ed7af8786af5af539681b4de14cb548d639"} Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.965678 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84d7b57f88-k4mvt" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.971416 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd" (OuterVolumeSpecName: "kube-api-access-b65sd") pod "afae0da4-aba0-420d-928b-8dde8472a40e" (UID: "afae0da4-aba0-420d-928b-8dde8472a40e"). InnerVolumeSpecName "kube-api-access-b65sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.979374 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afae0da4-aba0-420d-928b-8dde8472a40e" (UID: "afae0da4-aba0-420d-928b-8dde8472a40e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.991675 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" event={"ID":"8a3320a1-03ca-41c2-852b-b49bba57ca5e","Type":"ContainerDied","Data":"3ecb9b4a26b5656681bbc9bac35064db77a9826f7f6f0ff4ec59472240fdb7e5"} Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.992180 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997190 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b65sd\" (UniqueName: \"kubernetes.io/projected/afae0da4-aba0-420d-928b-8dde8472a40e-kube-api-access-b65sd\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997222 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997232 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afae0da4-aba0-420d-928b-8dde8472a40e-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997240 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997247 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997255 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997263 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:02 crc kubenswrapper[4812]: I0218 16:56:02.997272 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td48k\" (UniqueName: \"kubernetes.io/projected/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-kube-api-access-td48k\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.015247 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data" (OuterVolumeSpecName: "config-data") pod "afae0da4-aba0-420d-928b-8dde8472a40e" (UID: "afae0da4-aba0-420d-928b-8dde8472a40e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.033302 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.070340 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data" (OuterVolumeSpecName: "config-data") pod "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" (UID: "66ecf1c3-50b6-47dc-a17d-7b643a57bfc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.102691 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afae0da4-aba0-420d-928b-8dde8472a40e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.102726 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.102738 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.112595 4812 scope.go:117] "RemoveContainer" containerID="382c4b3ff8927b988b52321d8e7fe8f625ddb430aa8f7fd75b51228b4f39d3b4" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.147088 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.162650 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-fgthj"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.166284 4812 scope.go:117] "RemoveContainer" containerID="d00e60313d9d291f91798947d6e8091cd11c4e95acd121b5cc2ae6d5992d459b" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.242262 4812 scope.go:117] "RemoveContainer" containerID="e01936435116a74b46cf1f9e05fe97727733e461a5989a08ae4cb04cb4488334" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.264642 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.283440 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.285264 4812 scope.go:117] "RemoveContainer" containerID="829741699c5e6a6b61fd0004521e95c7464506eab9e2d5272f5815a27c941f73" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304178 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304787 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304812 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304846 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="cinder-scheduler" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304856 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="cinder-scheduler" Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304874 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304882 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304904 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="init" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304914 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="init" Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304929 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="probe" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304937 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="probe" Feb 18 16:56:03 crc kubenswrapper[4812]: E0218 16:56:03.304960 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.304968 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.305220 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="probe" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.305249 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.305266 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api-log" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.305286 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" containerName="barbican-api" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.305304 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" containerName="cinder-scheduler" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.306824 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.315395 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.343203 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.344668 4812 scope.go:117] "RemoveContainer" containerID="831930e0276db2995cc8ccd71fa3ecfcacb55030070013d6a774e57882932c6e" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.352122 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.361340 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84d7b57f88-k4mvt"] Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.418804 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.419398 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.419430 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.419481 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.419520 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.419541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpz2\" (UniqueName: \"kubernetes.io/projected/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-kube-api-access-7xpz2\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521713 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xpz2\" (UniqueName: \"kubernetes.io/projected/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-kube-api-access-7xpz2\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.521910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.531875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.534797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.536864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.537131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.541932 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xpz2\" (UniqueName: \"kubernetes.io/projected/d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80-kube-api-access-7xpz2\") pod \"cinder-scheduler-0\" (UID: \"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80\") " pod="openstack/cinder-scheduler-0" Feb 18 16:56:03 crc kubenswrapper[4812]: I0218 16:56:03.644026 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.009756 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" event={"ID":"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99","Type":"ContainerStarted","Data":"7f21b1116b40c0468682137921a45b955c952440c4edb296e7781ec67d12af3b"} Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.016365 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerStarted","Data":"3fec4def5b8f66effb46fbbb74fb16c27d8b14c8d5ba7bbf9e7f6c9ea254b419"} Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.039080 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" podStartSLOduration=2.797864741 podStartE2EDuration="19.039054329s" podCreationTimestamp="2026-02-18 16:55:45 +0000 UTC" firstStartedPulling="2026-02-18 16:55:46.296876838 +0000 UTC m=+1566.562487747" lastFinishedPulling="2026-02-18 16:56:02.538066426 +0000 UTC m=+1582.803677335" observedRunningTime="2026-02-18 16:56:04.03101753 +0000 UTC m=+1584.296628459" watchObservedRunningTime="2026-02-18 16:56:04.039054329 +0000 UTC m=+1584.304665238" Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.214715 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 16:56:04 crc kubenswrapper[4812]: W0218 16:56:04.216329 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9e8e68c_f0e0_4d09_a851_0bff2c7f6f80.slice/crio-c9e2bbe3cdabf96d169cc6bebb2299d31d1421bb9385431f1b7fb1e359ecbcfe WatchSource:0}: Error finding container c9e2bbe3cdabf96d169cc6bebb2299d31d1421bb9385431f1b7fb1e359ecbcfe: Status 404 returned error can't find the container with id c9e2bbe3cdabf96d169cc6bebb2299d31d1421bb9385431f1b7fb1e359ecbcfe Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.521533 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ecf1c3-50b6-47dc-a17d-7b643a57bfc0" path="/var/lib/kubelet/pods/66ecf1c3-50b6-47dc-a17d-7b643a57bfc0/volumes" Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.522743 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" path="/var/lib/kubelet/pods/8a3320a1-03ca-41c2-852b-b49bba57ca5e/volumes" Feb 18 16:56:04 crc kubenswrapper[4812]: I0218 16:56:04.523497 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afae0da4-aba0-420d-928b-8dde8472a40e" path="/var/lib/kubelet/pods/afae0da4-aba0-420d-928b-8dde8472a40e/volumes" Feb 18 16:56:05 crc kubenswrapper[4812]: I0218 16:56:05.060994 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80","Type":"ContainerStarted","Data":"c9e2bbe3cdabf96d169cc6bebb2299d31d1421bb9385431f1b7fb1e359ecbcfe"} Feb 18 16:56:05 crc kubenswrapper[4812]: I0218 16:56:05.076020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerStarted","Data":"ed1d300ecfdb742f910327c2337cd95862ff9830014c56eb5f293c2415e52a5a"} Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.087865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80","Type":"ContainerStarted","Data":"47fe113178bf618fe62b6c440c6bb6de00b6a107f1b17ba6e5b82cc46a49163f"} Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.088257 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80","Type":"ContainerStarted","Data":"87e70113e4d0beff659fde83fe859bed9114bb6fecd275f96c09a221722f8bf7"} Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.091193 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerStarted","Data":"ecd5b5eac4d3a794842001d8943645b25de3690c5b47a2d8a5a602b3cc7425cb"} Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.114582 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.114552869 podStartE2EDuration="3.114552869s" podCreationTimestamp="2026-02-18 16:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:06.108895548 +0000 UTC m=+1586.374506467" watchObservedRunningTime="2026-02-18 16:56:06.114552869 +0000 UTC m=+1586.380163778" Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.407972 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 16:56:06 crc kubenswrapper[4812]: I0218 16:56:06.708615 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-fgthj" podUID="8a3320a1-03ca-41c2-852b-b49bba57ca5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: i/o timeout" Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.114798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerStarted","Data":"804f0246e81288d4d1ab0e2b9c77ccfdd323165d5194a5cb441abe420be8a7d6"} Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.115461 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.114931 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-central-agent" containerID="cri-o://3fec4def5b8f66effb46fbbb74fb16c27d8b14c8d5ba7bbf9e7f6c9ea254b419" gracePeriod=30 Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.115007 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="sg-core" 
containerID="cri-o://ecd5b5eac4d3a794842001d8943645b25de3690c5b47a2d8a5a602b3cc7425cb" gracePeriod=30 Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.115007 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="proxy-httpd" containerID="cri-o://804f0246e81288d4d1ab0e2b9c77ccfdd323165d5194a5cb441abe420be8a7d6" gracePeriod=30 Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.115061 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-notification-agent" containerID="cri-o://ed1d300ecfdb742f910327c2337cd95862ff9830014c56eb5f293c2415e52a5a" gracePeriod=30 Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.150843 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.567495547 podStartE2EDuration="13.150817603s" podCreationTimestamp="2026-02-18 16:55:55 +0000 UTC" firstStartedPulling="2026-02-18 16:56:02.47093868 +0000 UTC m=+1582.736549589" lastFinishedPulling="2026-02-18 16:56:07.054260736 +0000 UTC m=+1587.319871645" observedRunningTime="2026-02-18 16:56:08.143301687 +0000 UTC m=+1588.408912596" watchObservedRunningTime="2026-02-18 16:56:08.150817603 +0000 UTC m=+1588.416428522" Feb 18 16:56:08 crc kubenswrapper[4812]: I0218 16:56:08.645067 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 16:56:09 crc kubenswrapper[4812]: I0218 16:56:09.125685 4812 generic.go:334] "Generic (PLEG): container finished" podID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerID="ecd5b5eac4d3a794842001d8943645b25de3690c5b47a2d8a5a602b3cc7425cb" exitCode=2 Feb 18 16:56:09 crc kubenswrapper[4812]: I0218 16:56:09.125926 4812 generic.go:334] "Generic (PLEG): container finished" podID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerID="ed1d300ecfdb742f910327c2337cd95862ff9830014c56eb5f293c2415e52a5a" exitCode=0 Feb 18 16:56:09 crc kubenswrapper[4812]: I0218 16:56:09.125714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerDied","Data":"ecd5b5eac4d3a794842001d8943645b25de3690c5b47a2d8a5a602b3cc7425cb"} Feb 18 16:56:09 crc kubenswrapper[4812]: I0218 16:56:09.125968 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerDied","Data":"ed1d300ecfdb742f910327c2337cd95862ff9830014c56eb5f293c2415e52a5a"} Feb 18 16:56:11 crc kubenswrapper[4812]: I0218 16:56:11.534888 4812 scope.go:117] "RemoveContainer" containerID="29714f3b627c994c2c22ba9d58fea0f2b3c7af25998354c25d01b5996ebf1046" Feb 18 16:56:11 crc kubenswrapper[4812]: I0218 16:56:11.569702 4812 scope.go:117] "RemoveContainer" containerID="269ff17caf7364a796dce14db815495b229998d6a0681347bac8607dd4427df9" Feb 18 16:56:11 crc kubenswrapper[4812]: I0218 16:56:11.623335 4812 scope.go:117] "RemoveContainer" containerID="5fd342793938ba57b2f76105d25b2eafde3b46d68ca531bb31e73a4b36dd2e39" Feb 18 16:56:12 crc kubenswrapper[4812]: I0218 16:56:12.508133 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:56:12 crc kubenswrapper[4812]: E0218 16:56:12.508905 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:56:13 crc kubenswrapper[4812]: I0218 16:56:13.882264 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 16:56:18 crc kubenswrapper[4812]: I0218 16:56:18.226953 4812 generic.go:334] "Generic (PLEG): container finished" podID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerID="3fec4def5b8f66effb46fbbb74fb16c27d8b14c8d5ba7bbf9e7f6c9ea254b419" exitCode=0 Feb 18 16:56:18 crc kubenswrapper[4812]: I0218 16:56:18.227031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerDied","Data":"3fec4def5b8f66effb46fbbb74fb16c27d8b14c8d5ba7bbf9e7f6c9ea254b419"} Feb 18 16:56:26 crc kubenswrapper[4812]: I0218 16:56:26.253139 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 16:56:27 crc kubenswrapper[4812]: I0218 16:56:27.320064 4812 generic.go:334] "Generic (PLEG): container finished" podID="432ecdb1-393e-4454-a386-3134c792b4cc" containerID="900ac1e7823b61cb43d8b708767522ffe936d94978851eab7a706aeb5300b2d6" exitCode=0 Feb 18 16:56:27 crc kubenswrapper[4812]: I0218 16:56:27.320179 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tv8tp" event={"ID":"432ecdb1-393e-4454-a386-3134c792b4cc","Type":"ContainerDied","Data":"900ac1e7823b61cb43d8b708767522ffe936d94978851eab7a706aeb5300b2d6"} Feb 18 16:56:27 crc kubenswrapper[4812]: I0218 16:56:27.508113 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:56:27 crc kubenswrapper[4812]: E0218 16:56:27.508398 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.748305 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.862439 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjqq6\" (UniqueName: \"kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6\") pod \"432ecdb1-393e-4454-a386-3134c792b4cc\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.862504 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config\") pod \"432ecdb1-393e-4454-a386-3134c792b4cc\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.862638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle\") pod \"432ecdb1-393e-4454-a386-3134c792b4cc\" (UID: \"432ecdb1-393e-4454-a386-3134c792b4cc\") " Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.875337 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6" (OuterVolumeSpecName: "kube-api-access-cjqq6") pod "432ecdb1-393e-4454-a386-3134c792b4cc" (UID: "432ecdb1-393e-4454-a386-3134c792b4cc"). InnerVolumeSpecName "kube-api-access-cjqq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.891053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "432ecdb1-393e-4454-a386-3134c792b4cc" (UID: "432ecdb1-393e-4454-a386-3134c792b4cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.893583 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config" (OuterVolumeSpecName: "config") pod "432ecdb1-393e-4454-a386-3134c792b4cc" (UID: "432ecdb1-393e-4454-a386-3134c792b4cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.964967 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjqq6\" (UniqueName: \"kubernetes.io/projected/432ecdb1-393e-4454-a386-3134c792b4cc-kube-api-access-cjqq6\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.965000 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:28 crc kubenswrapper[4812]: I0218 16:56:28.965012 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432ecdb1-393e-4454-a386-3134c792b4cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.339176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tv8tp" event={"ID":"432ecdb1-393e-4454-a386-3134c792b4cc","Type":"ContainerDied","Data":"d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2"} Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.339221 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6db55b6e10a59bd7013193194087e93704b5d621ee0f62750975226b0f32de2" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.339220 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tv8tp" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.640895 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:56:29 crc kubenswrapper[4812]: E0218 16:56:29.641450 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432ecdb1-393e-4454-a386-3134c792b4cc" containerName="neutron-db-sync" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.641474 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="432ecdb1-393e-4454-a386-3134c792b4cc" containerName="neutron-db-sync" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.641728 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="432ecdb1-393e-4454-a386-3134c792b4cc" containerName="neutron-db-sync" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.642886 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.672311 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.749845 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.752934 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.763008 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-www47" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.763283 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.763483 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.763669 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.779781 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.779878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwph\" (UniqueName: \"kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.779909 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.779937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.780021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.780085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.785936 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwph\" (UniqueName: 
\"kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883574 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883646 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883809 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpkm\" (UniqueName: 
\"kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.883840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.885171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.885837 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.886528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.888383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.888844 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.903590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwph\" (UniqueName: \"kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph\") pod \"dnsmasq-dns-5c9776ccc5-pbq7m\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.960156 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.985756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.985899 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.985954 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.985986 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpkm\" (UniqueName: \"kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.986019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.990511 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.990536 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.991053 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:29 crc kubenswrapper[4812]: I0218 16:56:29.991279 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:30 crc kubenswrapper[4812]: I0218 16:56:30.012053 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mtpkm\" (UniqueName: \"kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm\") pod \"neutron-5f99566fb-848lv\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:30 crc kubenswrapper[4812]: I0218 16:56:30.071781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:30 crc kubenswrapper[4812]: W0218 16:56:30.556517 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab93384b_6944_4f1a_a0b4_c2f1cb0fc0a9.slice/crio-50f4235965213150044e32f27b11bfe0515dff6ed5b435f5c93a4f7e0df0e365 WatchSource:0}: Error finding container 50f4235965213150044e32f27b11bfe0515dff6ed5b435f5c93a4f7e0df0e365: Status 404 returned error can't find the container with id 50f4235965213150044e32f27b11bfe0515dff6ed5b435f5c93a4f7e0df0e365 Feb 18 16:56:30 crc kubenswrapper[4812]: I0218 16:56:30.561322 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:56:30 crc kubenswrapper[4812]: W0218 16:56:30.804024 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd83fe8_1dc8_45ab_8170_db3766c208a4.slice/crio-0b6bdae0bd1feec6785a37a14a623108ee3e767355407d8d25f2653890b5a22d WatchSource:0}: Error finding container 0b6bdae0bd1feec6785a37a14a623108ee3e767355407d8d25f2653890b5a22d: Status 404 returned error can't find the container with id 0b6bdae0bd1feec6785a37a14a623108ee3e767355407d8d25f2653890b5a22d Feb 18 16:56:30 crc kubenswrapper[4812]: I0218 16:56:30.805566 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.359634 4812 generic.go:334] "Generic (PLEG): container finished" podID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerID="e5eab0d8af9eabc40b8dfe87477a9fb26b2bcb988c3ece93978c2df291fabb24" exitCode=0 Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.359745 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" event={"ID":"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9","Type":"ContainerDied","Data":"e5eab0d8af9eabc40b8dfe87477a9fb26b2bcb988c3ece93978c2df291fabb24"} Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.360029 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" event={"ID":"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9","Type":"ContainerStarted","Data":"50f4235965213150044e32f27b11bfe0515dff6ed5b435f5c93a4f7e0df0e365"} Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.361971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerStarted","Data":"864ccb30094be64435dcd975f5d783e2773acbf61e12a2c263e0a5cace141ce3"} Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.362023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerStarted","Data":"0b6bdae0bd1feec6785a37a14a623108ee3e767355407d8d25f2653890b5a22d"} Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 16:56:31.375714 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:31 crc kubenswrapper[4812]: I0218 
16:56:31.375966 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="15072f74-894a-40ee-9609-d58e29a27de8" containerName="watcher-decision-engine" containerID="cri-o://aed82da0ce27d1d06780ffdb8739312c7f876f9006f674bf15d457529bf8b160" gracePeriod=30 Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.075817 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c7874df7-hld7g"] Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.085186 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.087914 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.089288 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.119560 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7874df7-hld7g"] Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248801 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmz9\" (UniqueName: \"kubernetes.io/projected/fe5f38f6-ccbc-4355-b83e-c7b31825654c-kube-api-access-jpmz9\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248863 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-combined-ca-bundle\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248893 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-ovndb-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248922 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248960 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-public-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.248984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-httpd-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc 
kubenswrapper[4812]: I0218 16:56:32.249013 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-internal-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.350810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-httpd-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.350911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-internal-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.351226 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpmz9\" (UniqueName: \"kubernetes.io/projected/fe5f38f6-ccbc-4355-b83e-c7b31825654c-kube-api-access-jpmz9\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.351306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-combined-ca-bundle\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.351374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-ovndb-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.351431 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.351570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-public-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.356027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-internal-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.356057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-httpd-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.356151 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-config\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.356884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-ovndb-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.364060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-combined-ca-bundle\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.368903 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe5f38f6-ccbc-4355-b83e-c7b31825654c-public-tls-certs\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.369668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpmz9\" (UniqueName: \"kubernetes.io/projected/fe5f38f6-ccbc-4355-b83e-c7b31825654c-kube-api-access-jpmz9\") pod \"neutron-7c7874df7-hld7g\" (UID: \"fe5f38f6-ccbc-4355-b83e-c7b31825654c\") " pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.373922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" event={"ID":"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9","Type":"ContainerStarted","Data":"cecbd2b58692b82df5a1658b974b145481bb34b37cae24daf1212ae34cf1de84"} Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.375115 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.377667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerStarted","Data":"a145a54ab5c6667dc546371899fe398054c70ec13ffe71a296ed3d5868ccbdda"} Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.378300 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.397473 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" podStartSLOduration=3.397455241 podStartE2EDuration="3.397455241s" podCreationTimestamp="2026-02-18 16:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:32.391412871 +0000 UTC m=+1612.657023780" 
watchObservedRunningTime="2026-02-18 16:56:32.397455241 +0000 UTC m=+1612.663066150" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.408808 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.428899 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f99566fb-848lv" podStartSLOduration=3.42887325 podStartE2EDuration="3.42887325s" podCreationTimestamp="2026-02-18 16:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:32.422278976 +0000 UTC m=+1612.687889905" watchObservedRunningTime="2026-02-18 16:56:32.42887325 +0000 UTC m=+1612.694484159" Feb 18 16:56:32 crc kubenswrapper[4812]: I0218 16:56:32.782985 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7874df7-hld7g"] Feb 18 16:56:33 crc kubenswrapper[4812]: I0218 16:56:33.389402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7874df7-hld7g" event={"ID":"fe5f38f6-ccbc-4355-b83e-c7b31825654c","Type":"ContainerStarted","Data":"5a10d6eca6dece7e9e2b39ac965aaaae9d364e0f70bafce41edd704f3aa25fbd"} Feb 18 16:56:33 crc kubenswrapper[4812]: I0218 16:56:33.389736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7874df7-hld7g" event={"ID":"fe5f38f6-ccbc-4355-b83e-c7b31825654c","Type":"ContainerStarted","Data":"b179e65d11fe96145ffc01a4e4815ba8315422e477c4d320d05c1dafbfa5b6ed"} Feb 18 16:56:34 crc kubenswrapper[4812]: I0218 16:56:34.402657 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7874df7-hld7g" event={"ID":"fe5f38f6-ccbc-4355-b83e-c7b31825654c","Type":"ContainerStarted","Data":"e464356f1085f85c4eb7a0222bad0e2378a60bec82ffedbff1ea4bf661a8ffef"} Feb 18 16:56:34 crc kubenswrapper[4812]: I0218 16:56:34.403080 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:56:34 crc kubenswrapper[4812]: I0218 16:56:34.421484 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c7874df7-hld7g" podStartSLOduration=2.421464231 podStartE2EDuration="2.421464231s" podCreationTimestamp="2026-02-18 16:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:34.416686613 +0000 UTC m=+1614.682297532" watchObservedRunningTime="2026-02-18 16:56:34.421464231 +0000 UTC m=+1614.687075150" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.438545 4812 generic.go:334] "Generic (PLEG): container finished" podID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerID="804f0246e81288d4d1ab0e2b9c77ccfdd323165d5194a5cb441abe420be8a7d6" exitCode=137 Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.439121 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerDied","Data":"804f0246e81288d4d1ab0e2b9c77ccfdd323165d5194a5cb441abe420be8a7d6"} Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.762549 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915626 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spj54\" (UniqueName: \"kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915700 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915741 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.915851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd\") pod \"758e3147-bf40-4e81-b1b0-12fe9711e04d\" (UID: \"758e3147-bf40-4e81-b1b0-12fe9711e04d\") " Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.917269 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.917361 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.923653 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54" (OuterVolumeSpecName: "kube-api-access-spj54") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "kube-api-access-spj54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.925881 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts" (OuterVolumeSpecName: "scripts") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.971142 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:38 crc kubenswrapper[4812]: I0218 16:56:38.999808 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.012975 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018654 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018691 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018702 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spj54\" (UniqueName: \"kubernetes.io/projected/758e3147-bf40-4e81-b1b0-12fe9711e04d-kube-api-access-spj54\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018718 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018730 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018742 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.018753 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/758e3147-bf40-4e81-b1b0-12fe9711e04d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.045275 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data" (OuterVolumeSpecName: "config-data") pod "758e3147-bf40-4e81-b1b0-12fe9711e04d" (UID: "758e3147-bf40-4e81-b1b0-12fe9711e04d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.120513 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758e3147-bf40-4e81-b1b0-12fe9711e04d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.450135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"758e3147-bf40-4e81-b1b0-12fe9711e04d","Type":"ContainerDied","Data":"4b9b462809becd1de3d2ba993e9e1c72b268ddff97c3e4d3ad75c924631a3ba5"} Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.450186 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.450189 4812 scope.go:117] "RemoveContainer" containerID="804f0246e81288d4d1ab0e2b9c77ccfdd323165d5194a5cb441abe420be8a7d6" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.497252 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.504962 4812 scope.go:117] "RemoveContainer" containerID="ecd5b5eac4d3a794842001d8943645b25de3690c5b47a2d8a5a602b3cc7425cb" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.512665 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.523531 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:39 crc kubenswrapper[4812]: E0218 16:56:39.524127 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-notification-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524150 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-notification-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: E0218 16:56:39.524168 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="proxy-httpd" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524176 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="proxy-httpd" Feb 18 16:56:39 crc kubenswrapper[4812]: E0218 16:56:39.524217 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="sg-core" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524228 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="sg-core" Feb 18 16:56:39 crc kubenswrapper[4812]: E0218 16:56:39.524243 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-central-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524252 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-central-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524502 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="sg-core" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524522 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-notification-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524539 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="proxy-httpd" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.524629 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" containerName="ceilometer-central-agent" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.531957 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.534506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.534997 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.535845 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.541001 4812 scope.go:117] "RemoveContainer" containerID="ed1d300ecfdb742f910327c2337cd95862ff9830014c56eb5f293c2415e52a5a" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.543019 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.568627 4812 scope.go:117] "RemoveContainer" containerID="3fec4def5b8f66effb46fbbb74fb16c27d8b14c8d5ba7bbf9e7f6c9ea254b419" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.630982 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631083 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbn6\" (UniqueName: \"kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631264 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.631410 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733562 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbn6\" (UniqueName: \"kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733656 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733679 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.733724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.734012 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.734190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.738086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.739441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.746589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.749327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.750499 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbn6\" (UniqueName: \"kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.762164 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.859774 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:39 crc kubenswrapper[4812]: I0218 16:56:39.962277 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.026890 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.038518 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="dnsmasq-dns" containerID="cri-o://cec33189e140e353ce9f3b7be124f21cfe85f714e4853dad789bf44b6d761960" gracePeriod=10 Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.076901 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.388604 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.393143 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.462365 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerStarted","Data":"d4334c09b5d12fc80385dd1220ea7d56057f2a8d142961d2626251129a715c37"} Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.465470 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerID="cec33189e140e353ce9f3b7be124f21cfe85f714e4853dad789bf44b6d761960" exitCode=0 Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.465527 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" event={"ID":"1a32938b-c202-455b-ad5d-0cc3b3f94693","Type":"ContainerDied","Data":"cec33189e140e353ce9f3b7be124f21cfe85f714e4853dad789bf44b6d761960"} Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.523258 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758e3147-bf40-4e81-b1b0-12fe9711e04d" path="/var/lib/kubelet/pods/758e3147-bf40-4e81-b1b0-12fe9711e04d/volumes" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.524207 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:56:40 crc kubenswrapper[4812]: E0218 16:56:40.524495 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.584898 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650913 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650958 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-655b7\" (UniqueName: \"kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.650987 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0\") pod \"1a32938b-c202-455b-ad5d-0cc3b3f94693\" (UID: \"1a32938b-c202-455b-ad5d-0cc3b3f94693\") " Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.660718 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7" (OuterVolumeSpecName: "kube-api-access-655b7") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "kube-api-access-655b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.719354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.730255 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.738429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config" (OuterVolumeSpecName: "config") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.744721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.754043 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-655b7\" (UniqueName: \"kubernetes.io/projected/1a32938b-c202-455b-ad5d-0cc3b3f94693-kube-api-access-655b7\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.754090 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.754121 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.754130 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.754140 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.759792 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a32938b-c202-455b-ad5d-0cc3b3f94693" (UID: "1a32938b-c202-455b-ad5d-0cc3b3f94693"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:56:40 crc kubenswrapper[4812]: I0218 16:56:40.856544 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a32938b-c202-455b-ad5d-0cc3b3f94693-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.477443 4812 generic.go:334] "Generic (PLEG): container finished" podID="15072f74-894a-40ee-9609-d58e29a27de8" containerID="aed82da0ce27d1d06780ffdb8739312c7f876f9006f674bf15d457529bf8b160" exitCode=0 Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.477546 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"15072f74-894a-40ee-9609-d58e29a27de8","Type":"ContainerDied","Data":"aed82da0ce27d1d06780ffdb8739312c7f876f9006f674bf15d457529bf8b160"} Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.482085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" event={"ID":"1a32938b-c202-455b-ad5d-0cc3b3f94693","Type":"ContainerDied","Data":"db258643cad66244b3952021f72cbb299b21f9e06e54fb5db45456b1c9046e2e"} Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.482162 4812 scope.go:117] "RemoveContainer" containerID="cec33189e140e353ce9f3b7be124f21cfe85f714e4853dad789bf44b6d761960" Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.482226 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-74xll" Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.541926 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.550467 4812 scope.go:117] "RemoveContainer" containerID="8158b8b2599890f6b8b8547367dd768ea83064a4e6e7d99826e9dcdaf5ace8b3" Feb 18 16:56:41 crc kubenswrapper[4812]: I0218 16:56:41.555720 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-74xll"] Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.131816 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.183814 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs\") pod \"15072f74-894a-40ee-9609-d58e29a27de8\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.184147 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca\") pod \"15072f74-894a-40ee-9609-d58e29a27de8\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.184206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75rj\" (UniqueName: \"kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj\") pod \"15072f74-894a-40ee-9609-d58e29a27de8\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.184251 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data\") pod \"15072f74-894a-40ee-9609-d58e29a27de8\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.184252 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs" (OuterVolumeSpecName: "logs") pod "15072f74-894a-40ee-9609-d58e29a27de8" (UID: "15072f74-894a-40ee-9609-d58e29a27de8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.184279 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle\") pod \"15072f74-894a-40ee-9609-d58e29a27de8\" (UID: \"15072f74-894a-40ee-9609-d58e29a27de8\") " Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.185085 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15072f74-894a-40ee-9609-d58e29a27de8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.201362 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj" (OuterVolumeSpecName: "kube-api-access-f75rj") pod "15072f74-894a-40ee-9609-d58e29a27de8" (UID: "15072f74-894a-40ee-9609-d58e29a27de8"). InnerVolumeSpecName "kube-api-access-f75rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.223599 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "15072f74-894a-40ee-9609-d58e29a27de8" (UID: "15072f74-894a-40ee-9609-d58e29a27de8"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.273348 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15072f74-894a-40ee-9609-d58e29a27de8" (UID: "15072f74-894a-40ee-9609-d58e29a27de8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.286419 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75rj\" (UniqueName: \"kubernetes.io/projected/15072f74-894a-40ee-9609-d58e29a27de8-kube-api-access-f75rj\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.286450 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.286461 4812 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.356218 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data" (OuterVolumeSpecName: "config-data") pod "15072f74-894a-40ee-9609-d58e29a27de8" (UID: "15072f74-894a-40ee-9609-d58e29a27de8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.388806 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15072f74-894a-40ee-9609-d58e29a27de8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.492349 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.492363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"15072f74-894a-40ee-9609-d58e29a27de8","Type":"ContainerDied","Data":"d19c97982f4b747f17a8d451cdbcc5b98c5488d6ffc742dc9fc352f3bc7e513c"} Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.492752 4812 scope.go:117] "RemoveContainer" containerID="aed82da0ce27d1d06780ffdb8739312c7f876f9006f674bf15d457529bf8b160" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.496411 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerStarted","Data":"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46"} Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.526420 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" path="/var/lib/kubelet/pods/1a32938b-c202-455b-ad5d-0cc3b3f94693/volumes" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.562604 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.576144 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.589143 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:42 crc kubenswrapper[4812]: E0218 16:56:42.590132 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="dnsmasq-dns" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.590153 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="dnsmasq-dns" Feb 18 16:56:42 crc kubenswrapper[4812]: E0218 16:56:42.590166 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15072f74-894a-40ee-9609-d58e29a27de8" containerName="watcher-decision-engine" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.590174 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="15072f74-894a-40ee-9609-d58e29a27de8" containerName="watcher-decision-engine" Feb 18 16:56:42 crc kubenswrapper[4812]: E0218 16:56:42.590207 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="init" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.590215 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="init" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.590452 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a32938b-c202-455b-ad5d-0cc3b3f94693" containerName="dnsmasq-dns" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.590491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="15072f74-894a-40ee-9609-d58e29a27de8" containerName="watcher-decision-engine" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.591369 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.593721 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.607550 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.695043 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c298149-36ad-42bd-b736-f1fe48687edf-logs\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.695139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.695204 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.695373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.695582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k4pb\" (UniqueName: \"kubernetes.io/projected/1c298149-36ad-42bd-b736-f1fe48687edf-kube-api-access-4k4pb\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.797524 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.797705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.797848 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k4pb\" (UniqueName: \"kubernetes.io/projected/1c298149-36ad-42bd-b736-f1fe48687edf-kube-api-access-4k4pb\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.797950 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c298149-36ad-42bd-b736-f1fe48687edf-logs\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.798074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.798605 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c298149-36ad-42bd-b736-f1fe48687edf-logs\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.807397 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.807445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.809322 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c298149-36ad-42bd-b736-f1fe48687edf-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.815318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k4pb\" (UniqueName: \"kubernetes.io/projected/1c298149-36ad-42bd-b736-f1fe48687edf-kube-api-access-4k4pb\") pod \"watcher-decision-engine-0\" (UID: \"1c298149-36ad-42bd-b736-f1fe48687edf\") " pod="openstack/watcher-decision-engine-0" Feb 18 16:56:42 crc kubenswrapper[4812]: I0218 16:56:42.910343 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:43 crc kubenswrapper[4812]: I0218 16:56:43.416577 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 16:56:43 crc kubenswrapper[4812]: I0218 16:56:43.515923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1c298149-36ad-42bd-b736-f1fe48687edf","Type":"ContainerStarted","Data":"d3a8b4b80cb5fbefe252f9b5e359021ff2772d98b4a719178483de83f47d834e"} Feb 18 16:56:44 crc kubenswrapper[4812]: I0218 16:56:44.519639 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15072f74-894a-40ee-9609-d58e29a27de8" path="/var/lib/kubelet/pods/15072f74-894a-40ee-9609-d58e29a27de8/volumes" Feb 18 16:56:44 crc kubenswrapper[4812]: I0218 16:56:44.538775 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1c298149-36ad-42bd-b736-f1fe48687edf","Type":"ContainerStarted","Data":"a8df9bdb6983f765bac4992c3f8062a8bddfc7e5a76edbfea753bec1c1bd6a02"} Feb 18 16:56:44 crc kubenswrapper[4812]: I0218 16:56:44.541840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerStarted","Data":"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96"} Feb 18 16:56:44 crc kubenswrapper[4812]: I0218 16:56:44.564167 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.564146938 podStartE2EDuration="2.564146938s" podCreationTimestamp="2026-02-18 16:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:44.560148689 +0000 UTC m=+1624.825759608" watchObservedRunningTime="2026-02-18 16:56:44.564146938 +0000 UTC m=+1624.829757847" Feb 18 16:56:45 crc kubenswrapper[4812]: I0218 16:56:45.554749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerStarted","Data":"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f"} Feb 18 16:56:47 crc kubenswrapper[4812]: I0218 16:56:47.574702 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerStarted","Data":"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d"} Feb 18 16:56:47 crc kubenswrapper[4812]: I0218 16:56:47.575224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:56:47 crc kubenswrapper[4812]: I0218 16:56:47.606770 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7927070189999998 podStartE2EDuration="8.606750093s" podCreationTimestamp="2026-02-18 16:56:39 +0000 UTC" firstStartedPulling="2026-02-18 16:56:40.388392967 +0000 UTC m=+1620.654003876" lastFinishedPulling="2026-02-18 16:56:47.202436041 +0000 UTC m=+1627.468046950" observedRunningTime="2026-02-18 16:56:47.596591591 +0000 UTC m=+1627.862202500" watchObservedRunningTime="2026-02-18 16:56:47.606750093 +0000 UTC m=+1627.872361002" Feb 18 16:56:49 crc kubenswrapper[4812]: I0218 16:56:49.596823 4812 generic.go:334] "Generic (PLEG): container finished" podID="3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" 
containerID="7f21b1116b40c0468682137921a45b955c952440c4edb296e7781ec67d12af3b" exitCode=0 Feb 18 16:56:49 crc kubenswrapper[4812]: I0218 16:56:49.596908 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" event={"ID":"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99","Type":"ContainerDied","Data":"7f21b1116b40c0468682137921a45b955c952440c4edb296e7781ec67d12af3b"} Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.006487 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.047222 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.047553 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-central-agent" containerID="cri-o://c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46" gracePeriod=30 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.047692 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-notification-agent" containerID="cri-o://996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96" gracePeriod=30 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.047713 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="proxy-httpd" containerID="cri-o://d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d" gracePeriod=30 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.047658 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="sg-core" containerID="cri-o://db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f" gracePeriod=30 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.161037 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts\") pod \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.161125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbpn5\" (UniqueName: \"kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5\") pod \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.161183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle\") pod \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\" (UID: \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.161250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data\") pod \"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\" (UID: 
\"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99\") " Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.176283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5" (OuterVolumeSpecName: "kube-api-access-lbpn5") pod "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" (UID: "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99"). InnerVolumeSpecName "kube-api-access-lbpn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.185529 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts" (OuterVolumeSpecName: "scripts") pod "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" (UID: "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.208250 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" (UID: "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.223587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data" (OuterVolumeSpecName: "config-data") pod "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" (UID: "3a4fadf8-05f1-41b5-bfba-e5af3ce82f99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.263580 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.263634 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbpn5\" (UniqueName: \"kubernetes.io/projected/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-kube-api-access-lbpn5\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.263653 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.263665 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.617886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" event={"ID":"3a4fadf8-05f1-41b5-bfba-e5af3ce82f99","Type":"ContainerDied","Data":"6375072a58375aa34a68741b927b8e5487988dda16e89a95815404d713ad9c9e"} Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.618255 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6375072a58375aa34a68741b927b8e5487988dda16e89a95815404d713ad9c9e" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.617927 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zdg8w" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621628 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerID="d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d" exitCode=0 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621665 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerID="db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f" exitCode=2 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621674 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerID="996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96" exitCode=0 Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerDied","Data":"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d"} Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerDied","Data":"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f"} Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.621746 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerDied","Data":"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96"} Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.741226 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 16:56:51 crc kubenswrapper[4812]: E0218 16:56:51.741759 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" containerName="nova-cell0-conductor-db-sync" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.741781 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" containerName="nova-cell0-conductor-db-sync" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.742022 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" containerName="nova-cell0-conductor-db-sync" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.743134 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.745950 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vczxl" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.746335 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.810609 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.876979 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzfnm\" (UniqueName: \"kubernetes.io/projected/6f62e31f-2bed-4621-8627-abfb596eaf43-kube-api-access-lzfnm\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.877217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.877325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.978748 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzfnm\" (UniqueName: \"kubernetes.io/projected/6f62e31f-2bed-4621-8627-abfb596eaf43-kube-api-access-lzfnm\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.979198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.979890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.985080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.987846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f62e31f-2bed-4621-8627-abfb596eaf43-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:51 crc kubenswrapper[4812]: I0218 16:56:51.997330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzfnm\" (UniqueName: \"kubernetes.io/projected/6f62e31f-2bed-4621-8627-abfb596eaf43-kube-api-access-lzfnm\") pod \"nova-cell0-conductor-0\" (UID: \"6f62e31f-2bed-4621-8627-abfb596eaf43\") " pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:52 crc kubenswrapper[4812]: I0218 16:56:52.066196 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:52 crc kubenswrapper[4812]: I0218 16:56:52.563774 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 16:56:52 crc kubenswrapper[4812]: I0218 16:56:52.633749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6f62e31f-2bed-4621-8627-abfb596eaf43","Type":"ContainerStarted","Data":"1464e0637031952db476b393b0490927079cdd072fd3bb90d5eaf18539eb409e"} Feb 18 16:56:52 crc kubenswrapper[4812]: I0218 16:56:52.912143 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:52 crc kubenswrapper[4812]: I0218 16:56:52.952310 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.431797 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518521 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518585 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbn6\" (UniqueName: \"kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518769 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518931 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.518975 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml\") pod \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\" (UID: \"f8dd8cd1-425f-41a7-a4aa-88714f804eac\") " Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.520276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.523524 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.526360 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6" (OuterVolumeSpecName: "kube-api-access-brbn6") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "kube-api-access-brbn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.531709 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts" (OuterVolumeSpecName: "scripts") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.567460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.600746 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621893 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621926 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621936 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621949 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbn6\" (UniqueName: \"kubernetes.io/projected/f8dd8cd1-425f-41a7-a4aa-88714f804eac-kube-api-access-brbn6\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621962 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f8dd8cd1-425f-41a7-a4aa-88714f804eac-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.621976 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.633815 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.651501 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6f62e31f-2bed-4621-8627-abfb596eaf43","Type":"ContainerStarted","Data":"edf3813be2a2b9af8aab87c501d9c62bba9f99728f7d35584c25d5f1bc0ce66a"} Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.652220 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.654328 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data" (OuterVolumeSpecName: "config-data") pod "f8dd8cd1-425f-41a7-a4aa-88714f804eac" (UID: "f8dd8cd1-425f-41a7-a4aa-88714f804eac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.656571 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerID="c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46" exitCode=0 Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.656651 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.656791 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerDied","Data":"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46"} Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.656830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f8dd8cd1-425f-41a7-a4aa-88714f804eac","Type":"ContainerDied","Data":"d4334c09b5d12fc80385dd1220ea7d56057f2a8d142961d2626251129a715c37"} Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.656850 4812 scope.go:117] "RemoveContainer" containerID="d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.657397 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.688977 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.6889472899999998 podStartE2EDuration="2.68894729s" podCreationTimestamp="2026-02-18 16:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:53.681479635 +0000 UTC m=+1633.947090554" watchObservedRunningTime="2026-02-18 16:56:53.68894729 +0000 UTC m=+1633.954558199" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.691549 4812 scope.go:117] "RemoveContainer" containerID="db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.708538 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.713784 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.723804 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.723834 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8dd8cd1-425f-41a7-a4aa-88714f804eac-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.734136 4812 scope.go:117] "RemoveContainer" containerID="996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.735030 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.757372 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.757861 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="proxy-httpd" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.757875 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="proxy-httpd" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.757895 4812 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="sg-core" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.757905 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="sg-core" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.757922 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-central-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.757929 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-central-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.757958 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-notification-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.757965 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-notification-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.758198 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="proxy-httpd" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.758222 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-central-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.758237 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="sg-core" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.758250 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" containerName="ceilometer-notification-agent" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.760360 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.770720 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.771111 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.771242 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.786978 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825335 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825415 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825445 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825597 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9qz\" (UniqueName: \"kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.825622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.835240 4812 scope.go:117] "RemoveContainer" containerID="c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.899856 4812 scope.go:117] "RemoveContainer" containerID="d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.900541 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d\": container with ID starting with d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d not found: ID does not exist" containerID="d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.900594 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d"} err="failed to get container status \"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d\": rpc error: code = NotFound desc = could not find container \"d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d\": container with ID starting with d5627a116c5800a5c2fb8d155500bde057350d46065ec39c8819636ff80fa34d not found: ID does not exist" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.900815 4812 scope.go:117] "RemoveContainer" containerID="db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.908364 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f\": container with ID starting with db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f not found: ID does not exist" containerID="db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.908417 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f"} err="failed to get container status \"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f\": rpc error: code = NotFound desc = could not find container \"db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f\": container with ID starting with db7cd2ba46f4b7ce11cdc7f3b0c7c07549aa37ce46ffeef4efb09b5e2d27ea1f not found: ID does not exist" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.908450 4812 scope.go:117] "RemoveContainer" containerID="996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.908941 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96\": container with ID starting with 996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96 not found: ID does not exist" containerID="996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.908970 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96"} err="failed to get container status \"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96\": rpc error: code = NotFound desc = could not find container \"996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96\": container with ID starting with 996c52e975579b9f68705fa7e45591d4ea3d7060cc37e0fec5cb0a7549f28b96 not found: ID does not exist" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.908988 4812 scope.go:117] "RemoveContainer" containerID="c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:53.909243 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46\": container with ID starting with c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46 not found: ID does not exist" containerID="c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.909261 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46"} err="failed to get container status \"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46\": rpc error: code = NotFound desc = could not find container \"c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46\": container with ID starting with c4d4f3c1331eddd6317d1163e0ffd983f7cba0a299c7338d3f5f58e0b9409d46 not found: ID does not exist" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928628 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928694 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928764 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928809 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc 
kubenswrapper[4812]: I0218 16:56:53.928926 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9qz\" (UniqueName: \"kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.928960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.929144 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.929818 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.930217 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.935801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.943678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.946055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.948683 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.949831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:53.954948 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9qz\" (UniqueName: \"kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz\") pod \"ceilometer-0\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:54.108743 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:54.508580 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:56:54 crc kubenswrapper[4812]: E0218 16:56:54.509352 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:54.527505 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dd8cd1-425f-41a7-a4aa-88714f804eac" path="/var/lib/kubelet/pods/f8dd8cd1-425f-41a7-a4aa-88714f804eac/volumes" Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:54.601062 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:56:54 crc kubenswrapper[4812]: W0218 16:56:54.604384 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc04c0a0_9857_4332_a1f2_f9368702349b.slice/crio-ce2eb5a594a7745a59d2cd435c77b61c67283dc55763451d4f11699467697dbc WatchSource:0}: Error finding container ce2eb5a594a7745a59d2cd435c77b61c67283dc55763451d4f11699467697dbc: Status 404 returned error can't find the container with id ce2eb5a594a7745a59d2cd435c77b61c67283dc55763451d4f11699467697dbc Feb 18 16:56:54 crc kubenswrapper[4812]: I0218 16:56:54.667256 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerStarted","Data":"ce2eb5a594a7745a59d2cd435c77b61c67283dc55763451d4f11699467697dbc"} Feb 18 16:56:55 crc kubenswrapper[4812]: I0218 16:56:55.677965 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerStarted","Data":"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43"} Feb 18 16:56:56 crc kubenswrapper[4812]: I0218 16:56:56.692269 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerStarted","Data":"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed"} Feb 18 16:56:56 crc kubenswrapper[4812]: I0218 16:56:56.988985 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:56:56 crc kubenswrapper[4812]: I0218 16:56:56.991658 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.002934 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.101149 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.103182 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.103254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.103342 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8xd\" (UniqueName: \"kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.206290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.206371 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.206907 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.206917 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.207167 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8xd\" (UniqueName: \"kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd\") pod \"redhat-marketplace-mpph4\" (UID: 
\"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.233648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8xd\" (UniqueName: \"kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd\") pod \"redhat-marketplace-mpph4\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.328660 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.743284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerStarted","Data":"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c"} Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.773551 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vgl82"] Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.777262 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.780690 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.781009 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.793645 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgl82"] Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.936657 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.960259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqts\" (UniqueName: \"kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.960521 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.960570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:57 crc kubenswrapper[4812]: I0218 16:56:57.960709 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data\") pod \"nova-cell0-cell-mapping-vgl82\" 
(UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.065906 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqts\" (UniqueName: \"kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.066353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.066385 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.066457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.078029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.081673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.092633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.109848 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqts\" (UniqueName: \"kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts\") pod \"nova-cell0-cell-mapping-vgl82\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.163052 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.164860 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.177884 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.207894 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.220396 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.236756 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.238255 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.242003 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.273059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.273110 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.273221 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstht\" (UniqueName: \"kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.273270 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.331909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375505 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstht\" (UniqueName: \"kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.375787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm4x\" (UniqueName: \"kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.379242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.401658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.411038 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.425805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.434220 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.441559 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.450283 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.458818 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstht\" (UniqueName: \"kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht\") pod \"nova-api-0\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.477308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.477442 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm4x\" (UniqueName: \"kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.487728 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.490573 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.501695 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.503819 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.537774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm4x\" (UniqueName: \"kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x\") pod \"nova-scheduler-0\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.598239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.598320 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.598451 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.598495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrm8d\" (UniqueName: \"kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.629634 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.700410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.700760 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrm8d\" (UniqueName: \"kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.700856 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.700995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.701672 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.710028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.712039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.808803 4812 generic.go:334] "Generic (PLEG): container finished" podID="584da31a-b836-4149-bb17-195580ee5898" containerID="007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1" exitCode=0 Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.808863 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerDied","Data":"007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1"} Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.808898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerStarted","Data":"c43ed285af15cfe455ef8df47cddf33b561fb48ebd18bc62866fa30e8bdbd46e"} Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.855768 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jrm8d\" (UniqueName: \"kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d\") pod \"nova-metadata-0\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " pod="openstack/nova-metadata-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.859595 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.861438 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.924234 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.974352 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.977771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:58 crc kubenswrapper[4812]: I0218 16:56:58.982315 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.004779 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.016021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sktxh\" (UniqueName: \"kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.017617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.017841 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.017905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.018059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.018140 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.037139 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121360 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121538 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121607 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmjj\" (UniqueName: \"kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sktxh\" (UniqueName: \"kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh\") pod 
\"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.121732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.122406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.122452 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.122578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.123407 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.123480 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.146604 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sktxh\" (UniqueName: \"kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh\") pod \"dnsmasq-dns-757b4f8459-m87z2\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.151529 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgl82"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.223858 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.223922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmjj\" (UniqueName: 
\"kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.224076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.229923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.245636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.247864 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.252912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmjj\" (UniqueName: \"kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj\") pod \"nova-cell1-novncproxy-0\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.330587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.428373 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.564692 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.796263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.854292 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerStarted","Data":"4dc5acb02ff5fe039b98b7d4a82a29167b7d8694df3a4a0ea762699bed2a5fea"} Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.858251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerStarted","Data":"0f0790266e26c45c377cbf3bdc7d53f6aa3250b91cedf378fcc8a2a94b40b0bd"} Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.859699 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85srd"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.862600 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.862528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgl82" event={"ID":"7dd32627-112f-4222-913d-a675587a7472","Type":"ContainerStarted","Data":"246d84e31991b7591eb2f6a6eef20aad4ab31f785f5abefa518991ca14058adc"} Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.863029 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgl82" event={"ID":"7dd32627-112f-4222-913d-a675587a7472","Type":"ContainerStarted","Data":"60a8dea72a9e8974b2f08417c999b6230d721e5940e43320a325990dd9b243e7"} Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.869346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"675e2fb3-9001-4221-b2a4-46f47d613919","Type":"ContainerStarted","Data":"4f0c2331ba8fa8ddb9073286926294334f6bc64414cb3536fc768a3fb96f7b53"} Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.869559 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.869576 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.891898 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85srd"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.892165 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vgl82" podStartSLOduration=2.892144949 podStartE2EDuration="2.892144949s" podCreationTimestamp="2026-02-18 16:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:56:59.891402171 +0000 UTC m=+1640.157013080" watchObservedRunningTime="2026-02-18 16:56:59.892144949 +0000 UTC m=+1640.157755868" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.952263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.973022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.974015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:56:59 crc kubenswrapper[4812]: I0218 16:56:59.974052 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z25nq\" (UniqueName: \"kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:56:59 crc 
kubenswrapper[4812]: I0218 16:56:59.974252 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.079831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.079940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.079979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.080015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z25nq\" (UniqueName: \"kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.088305 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.088488 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.096374 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.104699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.111823 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z25nq\" (UniqueName: 
\"kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq\") pod \"nova-cell1-conductor-db-sync-85srd\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.207988 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.247149 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.884857 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerStarted","Data":"455314acb4b5429e79e33a5631a7a6a38bade72dd69a0a5f47aa4b5c9d6806c1"} Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.885497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerStarted","Data":"1db5d138e1491c07702a29328e6b27db143d852e2d0de3904741d89dcde29c09"} Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.886753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"df4233d8-a9fa-4374-830b-1831c247f919","Type":"ContainerStarted","Data":"45635694cf40581252b69f8b2f7d8d6ef3bd05efbfb5b5ed482209b5eeee23df"} Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.900463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerStarted","Data":"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06"} Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.900634 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.907545 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85srd"] Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.908022 4812 generic.go:334] "Generic (PLEG): container finished" podID="584da31a-b836-4149-bb17-195580ee5898" containerID="f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73" exitCode=0 Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.909371 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerDied","Data":"f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73"} Feb 18 16:57:00 crc kubenswrapper[4812]: W0218 16:57:00.938497 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c79f36_851f_461c_87e0_72071e1b7e22.slice/crio-f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f WatchSource:0}: Error finding container f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f: Status 404 returned error can't find the container with id f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f Feb 18 16:57:00 crc kubenswrapper[4812]: I0218 16:57:00.954367 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.664522689 podStartE2EDuration="7.954343106s" podCreationTimestamp="2026-02-18 16:56:53 +0000 UTC" 
firstStartedPulling="2026-02-18 16:56:54.606898806 +0000 UTC m=+1634.872509715" lastFinishedPulling="2026-02-18 16:56:59.896719233 +0000 UTC m=+1640.162330132" observedRunningTime="2026-02-18 16:57:00.938912213 +0000 UTC m=+1641.204523122" watchObservedRunningTime="2026-02-18 16:57:00.954343106 +0000 UTC m=+1641.219954005" Feb 18 16:57:01 crc kubenswrapper[4812]: I0218 16:57:01.955087 4812 generic.go:334] "Generic (PLEG): container finished" podID="f70cd4ea-091d-4739-b827-358c376b32b1" containerID="455314acb4b5429e79e33a5631a7a6a38bade72dd69a0a5f47aa4b5c9d6806c1" exitCode=0 Feb 18 16:57:01 crc kubenswrapper[4812]: I0218 16:57:01.955949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerDied","Data":"455314acb4b5429e79e33a5631a7a6a38bade72dd69a0a5f47aa4b5c9d6806c1"} Feb 18 16:57:01 crc kubenswrapper[4812]: I0218 16:57:01.962155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85srd" event={"ID":"77c79f36-851f-461c-87e0-72071e1b7e22","Type":"ContainerStarted","Data":"3e4c35349c11fde8fcd51f4205b1fe847d6a381038f7f4dcc4cff42b3bcfe304"} Feb 18 16:57:01 crc kubenswrapper[4812]: I0218 16:57:01.962213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85srd" event={"ID":"77c79f36-851f-461c-87e0-72071e1b7e22","Type":"ContainerStarted","Data":"f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f"} Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.031674 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-85srd" podStartSLOduration=3.031653006 podStartE2EDuration="3.031653006s" podCreationTimestamp="2026-02-18 16:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:02.014054529 +0000 UTC m=+1642.279665448" watchObservedRunningTime="2026-02-18 16:57:02.031653006 +0000 UTC m=+1642.297263915" Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.428011 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c7874df7-hld7g" Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.546294 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.546519 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f99566fb-848lv" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-api" containerID="cri-o://864ccb30094be64435dcd975f5d783e2773acbf61e12a2c263e0a5cace141ce3" gracePeriod=30 Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.546878 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f99566fb-848lv" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-httpd" containerID="cri-o://a145a54ab5c6667dc546371899fe398054c70ec13ffe71a296ed3d5868ccbdda" gracePeriod=30 Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.856554 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.889475 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.990914 4812 generic.go:334] 
"Generic (PLEG): container finished" podID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerID="a145a54ab5c6667dc546371899fe398054c70ec13ffe71a296ed3d5868ccbdda" exitCode=0 Feb 18 16:57:02 crc kubenswrapper[4812]: I0218 16:57:02.990995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerDied","Data":"a145a54ab5c6667dc546371899fe398054c70ec13ffe71a296ed3d5868ccbdda"} Feb 18 16:57:04 crc kubenswrapper[4812]: I0218 16:57:04.009025 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerStarted","Data":"a18639cba02e7e9086a8058c19396825d4507a4d98edb287e96a558523789bce"} Feb 18 16:57:04 crc kubenswrapper[4812]: I0218 16:57:04.009324 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:57:04 crc kubenswrapper[4812]: I0218 16:57:04.037281 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" podStartSLOduration=6.037262041 podStartE2EDuration="6.037262041s" podCreationTimestamp="2026-02-18 16:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:04.03316236 +0000 UTC m=+1644.298773289" watchObservedRunningTime="2026-02-18 16:57:04.037262041 +0000 UTC m=+1644.302872950" Feb 18 16:57:06 crc kubenswrapper[4812]: I0218 16:57:06.032126 4812 generic.go:334] "Generic (PLEG): container finished" podID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerID="864ccb30094be64435dcd975f5d783e2773acbf61e12a2c263e0a5cace141ce3" exitCode=0 Feb 18 16:57:06 crc kubenswrapper[4812]: I0218 16:57:06.032281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerDied","Data":"864ccb30094be64435dcd975f5d783e2773acbf61e12a2c263e0a5cace141ce3"} Feb 18 16:57:06 crc kubenswrapper[4812]: I0218 16:57:06.507838 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:57:06 crc kubenswrapper[4812]: E0218 16:57:06.508373 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.705881 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.839090 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtpkm\" (UniqueName: \"kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm\") pod \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.839522 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs\") pod \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.839575 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config\") pod \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.839594 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config\") pod \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.839816 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle\") pod \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\" (UID: \"2bd83fe8-1dc8-45ab-8170-db3766c208a4\") " Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.846884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2bd83fe8-1dc8-45ab-8170-db3766c208a4" (UID: "2bd83fe8-1dc8-45ab-8170-db3766c208a4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.849218 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm" (OuterVolumeSpecName: "kube-api-access-mtpkm") pod "2bd83fe8-1dc8-45ab-8170-db3766c208a4" (UID: "2bd83fe8-1dc8-45ab-8170-db3766c208a4"). InnerVolumeSpecName "kube-api-access-mtpkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.891775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd83fe8-1dc8-45ab-8170-db3766c208a4" (UID: "2bd83fe8-1dc8-45ab-8170-db3766c208a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.901400 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config" (OuterVolumeSpecName: "config") pod "2bd83fe8-1dc8-45ab-8170-db3766c208a4" (UID: "2bd83fe8-1dc8-45ab-8170-db3766c208a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.934290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2bd83fe8-1dc8-45ab-8170-db3766c208a4" (UID: "2bd83fe8-1dc8-45ab-8170-db3766c208a4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.942305 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.942351 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtpkm\" (UniqueName: \"kubernetes.io/projected/2bd83fe8-1dc8-45ab-8170-db3766c208a4-kube-api-access-mtpkm\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.942366 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.942375 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:07 crc kubenswrapper[4812]: I0218 16:57:07.942384 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2bd83fe8-1dc8-45ab-8170-db3766c208a4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.066538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerStarted","Data":"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.070059 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f99566fb-848lv" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.070187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f99566fb-848lv" event={"ID":"2bd83fe8-1dc8-45ab-8170-db3766c208a4","Type":"ContainerDied","Data":"0b6bdae0bd1feec6785a37a14a623108ee3e767355407d8d25f2653890b5a22d"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.070267 4812 scope.go:117] "RemoveContainer" containerID="a145a54ab5c6667dc546371899fe398054c70ec13ffe71a296ed3d5868ccbdda" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.081061 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerStarted","Data":"8fa0c14188506757b437bf215dca45319d1ba04309eff2d29c1e38f922bd12b8"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.085531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"df4233d8-a9fa-4374-830b-1831c247f919","Type":"ContainerStarted","Data":"8b917e339de5f0b91a49a97fe60513874f5caebb28979a9fb51a90d316adf4e3"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.085643 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="df4233d8-a9fa-4374-830b-1831c247f919" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8b917e339de5f0b91a49a97fe60513874f5caebb28979a9fb51a90d316adf4e3" gracePeriod=30 Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.093888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"675e2fb3-9001-4221-b2a4-46f47d613919","Type":"ContainerStarted","Data":"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.103950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerStarted","Data":"fcde6560d528922d7aabd9485018485cf84522f9586002f601c06cfaa00547c0"} Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.133394 4812 scope.go:117] "RemoveContainer" containerID="864ccb30094be64435dcd975f5d783e2773acbf61e12a2c263e0a5cace141ce3" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.136195 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mpph4" podStartSLOduration=8.628867161 podStartE2EDuration="12.136170617s" podCreationTimestamp="2026-02-18 16:56:56 +0000 UTC" firstStartedPulling="2026-02-18 16:56:58.814845208 +0000 UTC m=+1639.080456117" lastFinishedPulling="2026-02-18 16:57:02.322148664 +0000 UTC m=+1642.587759573" observedRunningTime="2026-02-18 16:57:08.101026765 +0000 UTC m=+1648.366637684" watchObservedRunningTime="2026-02-18 16:57:08.136170617 +0000 UTC m=+1648.401781526" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.136815 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.146769 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f99566fb-848lv"] Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.164549 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.412718806 podStartE2EDuration="10.16452666s" podCreationTimestamp="2026-02-18 16:56:58 +0000 
UTC" firstStartedPulling="2026-02-18 16:56:59.558918181 +0000 UTC m=+1639.824529090" lastFinishedPulling="2026-02-18 16:57:07.310726035 +0000 UTC m=+1647.576336944" observedRunningTime="2026-02-18 16:57:08.144114614 +0000 UTC m=+1648.409725523" watchObservedRunningTime="2026-02-18 16:57:08.16452666 +0000 UTC m=+1648.430137569" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.185892 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.13786209 podStartE2EDuration="10.18586645s" podCreationTimestamp="2026-02-18 16:56:58 +0000 UTC" firstStartedPulling="2026-02-18 16:57:00.276284171 +0000 UTC m=+1640.541895090" lastFinishedPulling="2026-02-18 16:57:07.324288541 +0000 UTC m=+1647.589899450" observedRunningTime="2026-02-18 16:57:08.173310918 +0000 UTC m=+1648.438921837" watchObservedRunningTime="2026-02-18 16:57:08.18586645 +0000 UTC m=+1648.451477359" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.533533 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" path="/var/lib/kubelet/pods/2bd83fe8-1dc8-45ab-8170-db3766c208a4/volumes" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.631487 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.631535 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 16:57:08 crc kubenswrapper[4812]: I0218 16:57:08.664088 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.118336 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerStarted","Data":"302696c45a0576987df0b9f5c9ac465f28ba3cb58b5cdc9e34509c0a4c0ee658"} Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.119979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerStarted","Data":"ac8a368215e831bea6a79d23812ebc884da704555ac71649f025d897f62574b5"} Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.120543 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-log" containerID="cri-o://fcde6560d528922d7aabd9485018485cf84522f9586002f601c06cfaa00547c0" gracePeriod=30 Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.120622 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-metadata" containerID="cri-o://ac8a368215e831bea6a79d23812ebc884da704555ac71649f025d897f62574b5" gracePeriod=30 Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.141730 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.29871164 podStartE2EDuration="11.141705426s" podCreationTimestamp="2026-02-18 16:56:58 +0000 UTC" firstStartedPulling="2026-02-18 16:56:59.487812987 +0000 UTC m=+1639.753423896" lastFinishedPulling="2026-02-18 16:57:07.330806773 +0000 UTC m=+1647.596417682" observedRunningTime="2026-02-18 16:57:09.135872731 +0000 UTC m=+1649.401483660" 
watchObservedRunningTime="2026-02-18 16:57:09.141705426 +0000 UTC m=+1649.407316335" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.151977 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.174453 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.675305144 podStartE2EDuration="11.174434858s" podCreationTimestamp="2026-02-18 16:56:58 +0000 UTC" firstStartedPulling="2026-02-18 16:56:59.823689711 +0000 UTC m=+1640.089300620" lastFinishedPulling="2026-02-18 16:57:07.322819425 +0000 UTC m=+1647.588430334" observedRunningTime="2026-02-18 16:57:09.164812769 +0000 UTC m=+1649.430423678" watchObservedRunningTime="2026-02-18 16:57:09.174434858 +0000 UTC m=+1649.440045767" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.254287 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.332483 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.354453 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.354764 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="dnsmasq-dns" containerID="cri-o://cecbd2b58692b82df5a1658b974b145481bb34b37cae24daf1212ae34cf1de84" gracePeriod=10 Feb 18 16:57:09 crc kubenswrapper[4812]: I0218 16:57:09.961834 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.137289 4812 generic.go:334] "Generic (PLEG): container finished" podID="7dd32627-112f-4222-913d-a675587a7472" containerID="246d84e31991b7591eb2f6a6eef20aad4ab31f785f5abefa518991ca14058adc" exitCode=0 Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.137362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgl82" event={"ID":"7dd32627-112f-4222-913d-a675587a7472","Type":"ContainerDied","Data":"246d84e31991b7591eb2f6a6eef20aad4ab31f785f5abefa518991ca14058adc"} Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.140784 4812 generic.go:334] "Generic (PLEG): container finished" podID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerID="ac8a368215e831bea6a79d23812ebc884da704555ac71649f025d897f62574b5" exitCode=0 Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.140809 4812 generic.go:334] "Generic (PLEG): container finished" podID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerID="fcde6560d528922d7aabd9485018485cf84522f9586002f601c06cfaa00547c0" exitCode=143 Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.140846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerDied","Data":"ac8a368215e831bea6a79d23812ebc884da704555ac71649f025d897f62574b5"} Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.140964 4812 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerDied","Data":"fcde6560d528922d7aabd9485018485cf84522f9586002f601c06cfaa00547c0"} Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.143456 4812 generic.go:334] "Generic (PLEG): container finished" podID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerID="cecbd2b58692b82df5a1658b974b145481bb34b37cae24daf1212ae34cf1de84" exitCode=0 Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.143652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" event={"ID":"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9","Type":"ContainerDied","Data":"cecbd2b58692b82df5a1658b974b145481bb34b37cae24daf1212ae34cf1de84"} Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.613500 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.625136 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.721671 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrm8d\" (UniqueName: \"kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d\") pod \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.721731 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb\") pod \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.721839 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config\") pod \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.721884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0\") pod \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgwph\" (UniqueName: \"kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph\") pod \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data\") pod \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722165 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb\") pod 
\"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc\") pod \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\" (UID: \"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722220 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs\") pod \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.722256 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle\") pod \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\" (UID: \"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34\") " Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.729315 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d" (OuterVolumeSpecName: "kube-api-access-jrm8d") pod "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" (UID: "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34"). InnerVolumeSpecName "kube-api-access-jrm8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.729357 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs" (OuterVolumeSpecName: "logs") pod "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" (UID: "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.731210 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph" (OuterVolumeSpecName: "kube-api-access-mgwph") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). InnerVolumeSpecName "kube-api-access-mgwph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.790375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.796616 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" (UID: "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.802536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data" (OuterVolumeSpecName: "config-data") pod "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" (UID: "92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.803269 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config" (OuterVolumeSpecName: "config") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.824983 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.825488 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.825580 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.825636 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrm8d\" (UniqueName: \"kubernetes.io/projected/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-kube-api-access-jrm8d\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.825688 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.826008 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgwph\" (UniqueName: \"kubernetes.io/projected/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-kube-api-access-mgwph\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.826143 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.853857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.854268 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.863563 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" (UID: "ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.927936 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.928212 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:10 crc kubenswrapper[4812]: I0218 16:57:10.928279 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.155119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" event={"ID":"ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9","Type":"ContainerDied","Data":"50f4235965213150044e32f27b11bfe0515dff6ed5b435f5c93a4f7e0df0e365"} Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.155171 4812 scope.go:117] "RemoveContainer" containerID="cecbd2b58692b82df5a1658b974b145481bb34b37cae24daf1212ae34cf1de84" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.155455 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pbq7m" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.157480 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34","Type":"ContainerDied","Data":"4dc5acb02ff5fe039b98b7d4a82a29167b7d8694df3a4a0ea762699bed2a5fea"} Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.157514 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.205937 4812 scope.go:117] "RemoveContainer" containerID="e5eab0d8af9eabc40b8dfe87477a9fb26b2bcb988c3ece93978c2df291fabb24" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.209563 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.237119 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pbq7m"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.248433 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.249664 4812 scope.go:117] "RemoveContainer" containerID="ac8a368215e831bea6a79d23812ebc884da704555ac71649f025d897f62574b5" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.282155 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.330353 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.330371 4812 scope.go:117] "RemoveContainer" containerID="fcde6560d528922d7aabd9485018485cf84522f9586002f601c06cfaa00547c0" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364370 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="init" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.364447 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="init" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364527 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-httpd" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.364540 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-httpd" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364596 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-metadata" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.364605 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-metadata" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364662 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-api" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.364671 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-api" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="dnsmasq-dns" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.364781 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="dnsmasq-dns" Feb 18 16:57:11 crc kubenswrapper[4812]: E0218 16:57:11.364818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-log" Feb 18 16:57:11 
crc kubenswrapper[4812]: I0218 16:57:11.364828 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-log" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.365895 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-api" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.365984 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd83fe8-1dc8-45ab-8170-db3766c208a4" containerName="neutron-httpd" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.366007 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-log" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.366027 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" containerName="nova-metadata-metadata" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.366046 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" containerName="dnsmasq-dns" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.370893 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.371015 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.381812 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.381844 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.544944 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.545598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.545663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.545756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6dt\" (UniqueName: \"kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.545975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.603500 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.647298 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.647383 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.647473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.647501 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.647527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6dt\" (UniqueName: \"kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.650296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.654127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.660449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.662073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.666023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6dt\" (UniqueName: \"kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt\") pod \"nova-metadata-0\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.720038 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.749015 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data\") pod \"7dd32627-112f-4222-913d-a675587a7472\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.749125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts\") pod \"7dd32627-112f-4222-913d-a675587a7472\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.749166 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqts\" (UniqueName: \"kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts\") pod \"7dd32627-112f-4222-913d-a675587a7472\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.749183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle\") pod \"7dd32627-112f-4222-913d-a675587a7472\" (UID: \"7dd32627-112f-4222-913d-a675587a7472\") " Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.752794 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts" (OuterVolumeSpecName: "scripts") pod "7dd32627-112f-4222-913d-a675587a7472" (UID: "7dd32627-112f-4222-913d-a675587a7472"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.753577 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts" (OuterVolumeSpecName: "kube-api-access-jdqts") pod "7dd32627-112f-4222-913d-a675587a7472" (UID: "7dd32627-112f-4222-913d-a675587a7472"). InnerVolumeSpecName "kube-api-access-jdqts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.775260 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data" (OuterVolumeSpecName: "config-data") pod "7dd32627-112f-4222-913d-a675587a7472" (UID: "7dd32627-112f-4222-913d-a675587a7472"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.815301 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dd32627-112f-4222-913d-a675587a7472" (UID: "7dd32627-112f-4222-913d-a675587a7472"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.853670 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.853922 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.853932 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqts\" (UniqueName: \"kubernetes.io/projected/7dd32627-112f-4222-913d-a675587a7472-kube-api-access-jdqts\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:11 crc kubenswrapper[4812]: I0218 16:57:11.853943 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd32627-112f-4222-913d-a675587a7472-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.174903 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vgl82" event={"ID":"7dd32627-112f-4222-913d-a675587a7472","Type":"ContainerDied","Data":"60a8dea72a9e8974b2f08417c999b6230d721e5940e43320a325990dd9b243e7"} Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.174950 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a8dea72a9e8974b2f08417c999b6230d721e5940e43320a325990dd9b243e7" Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.174966 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vgl82" Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.218340 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:12 crc kubenswrapper[4812]: W0218 16:57:12.220298 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e46049_f695_4a51_a0f5_528275ee70e5.slice/crio-98a89e4d6d172794f16add0324402269418360d89c88f146359c03abed28a20d WatchSource:0}: Error finding container 98a89e4d6d172794f16add0324402269418360d89c88f146359c03abed28a20d: Status 404 returned error can't find the container with id 98a89e4d6d172794f16add0324402269418360d89c88f146359c03abed28a20d Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.343638 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.344244 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-log" containerID="cri-o://8fa0c14188506757b437bf215dca45319d1ba04309eff2d29c1e38f922bd12b8" gracePeriod=30 Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.344807 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-api" containerID="cri-o://302696c45a0576987df0b9f5c9ac465f28ba3cb58b5cdc9e34509c0a4c0ee658" gracePeriod=30 Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.368500 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.368743 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" containerName="nova-scheduler-scheduler" containerID="cri-o://5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" gracePeriod=30 Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.380024 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.527038 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34" path="/var/lib/kubelet/pods/92d2a91d-1a49-4a9d-8ccd-efe4df6b1c34/volumes" Feb 18 16:57:12 crc kubenswrapper[4812]: I0218 16:57:12.527816 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9" path="/var/lib/kubelet/pods/ab93384b-6944-4f1a-a0b4-c2f1cb0fc0a9/volumes" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.219393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerStarted","Data":"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.219710 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerStarted","Data":"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.219722 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerStarted","Data":"98a89e4d6d172794f16add0324402269418360d89c88f146359c03abed28a20d"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.219479 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-log" containerID="cri-o://728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" gracePeriod=30 Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.219847 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-metadata" containerID="cri-o://751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" gracePeriod=30 Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233424 4812 generic.go:334] "Generic (PLEG): container finished" podID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerID="302696c45a0576987df0b9f5c9ac465f28ba3cb58b5cdc9e34509c0a4c0ee658" exitCode=0 Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233459 4812 generic.go:334] "Generic (PLEG): container finished" podID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerID="8fa0c14188506757b437bf215dca45319d1ba04309eff2d29c1e38f922bd12b8" exitCode=143 Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerDied","Data":"302696c45a0576987df0b9f5c9ac465f28ba3cb58b5cdc9e34509c0a4c0ee658"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233547 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerDied","Data":"8fa0c14188506757b437bf215dca45319d1ba04309eff2d29c1e38f922bd12b8"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233558 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a235c1a-4d51-4b88-a876-942a39b0efd3","Type":"ContainerDied","Data":"0f0790266e26c45c377cbf3bdc7d53f6aa3250b91cedf378fcc8a2a94b40b0bd"} Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.233572 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0790266e26c45c377cbf3bdc7d53f6aa3250b91cedf378fcc8a2a94b40b0bd" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.282138 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.282117311 podStartE2EDuration="2.282117311s" podCreationTimestamp="2026-02-18 16:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:13.260079914 +0000 UTC m=+1653.525690823" watchObservedRunningTime="2026-02-18 16:57:13.282117311 +0000 UTC m=+1653.547728220" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.317603 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.401803 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle\") pod \"0a235c1a-4d51-4b88-a876-942a39b0efd3\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.402456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jstht\" (UniqueName: \"kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht\") pod \"0a235c1a-4d51-4b88-a876-942a39b0efd3\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.402598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs\") pod \"0a235c1a-4d51-4b88-a876-942a39b0efd3\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.402709 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data\") pod \"0a235c1a-4d51-4b88-a876-942a39b0efd3\" (UID: \"0a235c1a-4d51-4b88-a876-942a39b0efd3\") " Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.406349 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs" (OuterVolumeSpecName: "logs") pod "0a235c1a-4d51-4b88-a876-942a39b0efd3" (UID: "0a235c1a-4d51-4b88-a876-942a39b0efd3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.436610 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht" (OuterVolumeSpecName: "kube-api-access-jstht") pod "0a235c1a-4d51-4b88-a876-942a39b0efd3" (UID: "0a235c1a-4d51-4b88-a876-942a39b0efd3"). InnerVolumeSpecName "kube-api-access-jstht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.442967 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a235c1a-4d51-4b88-a876-942a39b0efd3" (UID: "0a235c1a-4d51-4b88-a876-942a39b0efd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.449421 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data" (OuterVolumeSpecName: "config-data") pod "0a235c1a-4d51-4b88-a876-942a39b0efd3" (UID: "0a235c1a-4d51-4b88-a876-942a39b0efd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.506439 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.506471 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a235c1a-4d51-4b88-a876-942a39b0efd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.506483 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jstht\" (UniqueName: \"kubernetes.io/projected/0a235c1a-4d51-4b88-a876-942a39b0efd3-kube-api-access-jstht\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:13 crc kubenswrapper[4812]: I0218 16:57:13.506492 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a235c1a-4d51-4b88-a876-942a39b0efd3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:13 crc kubenswrapper[4812]: E0218 16:57:13.633345 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:57:13 crc kubenswrapper[4812]: E0218 16:57:13.636494 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:57:13 crc kubenswrapper[4812]: E0218 16:57:13.638111 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:57:13 crc kubenswrapper[4812]: E0218 16:57:13.638151 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" containerName="nova-scheduler-scheduler" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.103068 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.219002 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt6dt\" (UniqueName: \"kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt\") pod \"79e46049-f695-4a51-a0f5-528275ee70e5\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.219388 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data\") pod \"79e46049-f695-4a51-a0f5-528275ee70e5\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.219418 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle\") pod \"79e46049-f695-4a51-a0f5-528275ee70e5\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.219479 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs\") pod \"79e46049-f695-4a51-a0f5-528275ee70e5\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.219541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs\") pod \"79e46049-f695-4a51-a0f5-528275ee70e5\" (UID: \"79e46049-f695-4a51-a0f5-528275ee70e5\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.221652 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs" (OuterVolumeSpecName: "logs") pod "79e46049-f695-4a51-a0f5-528275ee70e5" (UID: "79e46049-f695-4a51-a0f5-528275ee70e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.227833 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt" (OuterVolumeSpecName: "kube-api-access-gt6dt") pod "79e46049-f695-4a51-a0f5-528275ee70e5" (UID: "79e46049-f695-4a51-a0f5-528275ee70e5"). InnerVolumeSpecName "kube-api-access-gt6dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.243525 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.246594 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.246594 4812 generic.go:334] "Generic (PLEG): container finished" podID="675e2fb3-9001-4221-b2a4-46f47d613919" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" exitCode=0 Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.246706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"675e2fb3-9001-4221-b2a4-46f47d613919","Type":"ContainerDied","Data":"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62"} Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.246763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"675e2fb3-9001-4221-b2a4-46f47d613919","Type":"ContainerDied","Data":"4f0c2331ba8fa8ddb9073286926294334f6bc64414cb3536fc768a3fb96f7b53"} Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.246785 4812 scope.go:117] "RemoveContainer" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.250357 4812 generic.go:334] "Generic (PLEG): container finished" podID="79e46049-f695-4a51-a0f5-528275ee70e5" containerID="751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" exitCode=0 Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.250380 4812 generic.go:334] "Generic (PLEG): container finished" podID="79e46049-f695-4a51-a0f5-528275ee70e5" containerID="728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" exitCode=143 Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.250437 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.251468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerDied","Data":"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b"} Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.251535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerDied","Data":"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239"} Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.251552 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79e46049-f695-4a51-a0f5-528275ee70e5","Type":"ContainerDied","Data":"98a89e4d6d172794f16add0324402269418360d89c88f146359c03abed28a20d"} Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.251661 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.295941 4812 scope.go:117] "RemoveContainer" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.308079 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62\": container with ID starting with 5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62 not found: ID does not exist" containerID="5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.308397 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62"} err="failed to get container status \"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62\": rpc error: code = NotFound desc = could not find container \"5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62\": container with ID starting with 5b21de689980f0c7139110a122212758951354fdce4af341adcc481268c22f62 not found: ID does not exist" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.308565 4812 scope.go:117] "RemoveContainer" containerID="751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.317078 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data" (OuterVolumeSpecName: "config-data") pod "79e46049-f695-4a51-a0f5-528275ee70e5" (UID: "79e46049-f695-4a51-a0f5-528275ee70e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.322595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79e46049-f695-4a51-a0f5-528275ee70e5" (UID: "79e46049-f695-4a51-a0f5-528275ee70e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.323736 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlm4x\" (UniqueName: \"kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x\") pod \"675e2fb3-9001-4221-b2a4-46f47d613919\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.324056 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data\") pod \"675e2fb3-9001-4221-b2a4-46f47d613919\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.324088 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle\") pod \"675e2fb3-9001-4221-b2a4-46f47d613919\" (UID: \"675e2fb3-9001-4221-b2a4-46f47d613919\") " Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.325675 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt6dt\" (UniqueName: \"kubernetes.io/projected/79e46049-f695-4a51-a0f5-528275ee70e5-kube-api-access-gt6dt\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.325701 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.325713 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.325723 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e46049-f695-4a51-a0f5-528275ee70e5-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.332283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x" (OuterVolumeSpecName: "kube-api-access-dlm4x") pod "675e2fb3-9001-4221-b2a4-46f47d613919" (UID: "675e2fb3-9001-4221-b2a4-46f47d613919"). InnerVolumeSpecName "kube-api-access-dlm4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.351064 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "79e46049-f695-4a51-a0f5-528275ee70e5" (UID: "79e46049-f695-4a51-a0f5-528275ee70e5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.354461 4812 scope.go:117] "RemoveContainer" containerID="728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.355517 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.374432 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.384148 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385024 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd32627-112f-4222-913d-a675587a7472" containerName="nova-manage" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385047 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd32627-112f-4222-913d-a675587a7472" containerName="nova-manage" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385067 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-log" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385075 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-log" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385115 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-metadata" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385122 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-metadata" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385142 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" containerName="nova-scheduler-scheduler" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385149 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" containerName="nova-scheduler-scheduler" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385156 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-api" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385162 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-api" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.385170 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-log" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385177 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-log" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385413 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-api" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385437 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd32627-112f-4222-913d-a675587a7472" containerName="nova-manage" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385443 4812 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-log" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385455 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" containerName="nova-scheduler-scheduler" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385468 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" containerName="nova-api-log" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.385477 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" containerName="nova-metadata-metadata" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.386298 4812 scope.go:117] "RemoveContainer" containerID="751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.386595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data" (OuterVolumeSpecName: "config-data") pod "675e2fb3-9001-4221-b2a4-46f47d613919" (UID: "675e2fb3-9001-4221-b2a4-46f47d613919"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.386747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.386894 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b\": container with ID starting with 751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b not found: ID does not exist" containerID="751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.387013 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b"} err="failed to get container status \"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b\": rpc error: code = NotFound desc = could not find container \"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b\": container with ID starting with 751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b not found: ID does not exist" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.387233 4812 scope.go:117] "RemoveContainer" containerID="728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" Feb 18 16:57:14 crc kubenswrapper[4812]: E0218 16:57:14.388225 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239\": container with ID starting with 728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239 not found: ID does not exist" containerID="728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.388372 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239"} err="failed to get container status \"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239\": rpc 
error: code = NotFound desc = could not find container \"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239\": container with ID starting with 728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239 not found: ID does not exist" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.388476 4812 scope.go:117] "RemoveContainer" containerID="751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.388771 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "675e2fb3-9001-4221-b2a4-46f47d613919" (UID: "675e2fb3-9001-4221-b2a4-46f47d613919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.389575 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.391644 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b"} err="failed to get container status \"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b\": rpc error: code = NotFound desc = could not find container \"751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b\": container with ID starting with 751e93e0b44c49beaa873700722487ab8b505701b0d6c1a0a66b14dcae2c722b not found: ID does not exist" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.391682 4812 scope.go:117] "RemoveContainer" containerID="728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.392231 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239"} err="failed to get container status \"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239\": rpc error: code = NotFound desc = could not find container \"728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239\": container with ID starting with 728f3a5123042163f2549a518ad666a6f76007655c6a2cfc7bc543ca74d79239 not found: ID does not exist" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.397622 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.427434 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e46049-f695-4a51-a0f5-528275ee70e5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.427464 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.427477 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675e2fb3-9001-4221-b2a4-46f47d613919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.427485 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlm4x\" (UniqueName: 
\"kubernetes.io/projected/675e2fb3-9001-4221-b2a4-46f47d613919-kube-api-access-dlm4x\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.522452 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a235c1a-4d51-4b88-a876-942a39b0efd3" path="/var/lib/kubelet/pods/0a235c1a-4d51-4b88-a876-942a39b0efd3/volumes" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.529311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.529629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszhb\" (UniqueName: \"kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.529751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.530030 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.588431 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.597353 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.610854 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.624416 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.631408 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.631471 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.631569 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszhb\" (UniqueName: \"kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: 
I0218 16:57:14.631607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.633629 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.636382 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.636395 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.645040 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.646642 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.650868 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.656704 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszhb\" (UniqueName: \"kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb\") pod \"nova-api-0\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.678241 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.695875 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.697744 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.700587 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.700637 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.710213 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.711765 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.739301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.739590 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bl42\" (UniqueName: \"kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.739639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841154 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bl42\" (UniqueName: \"kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841332 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841359 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841449 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr54n\" (UniqueName: \"kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841558 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.841636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.850679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.851394 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.862831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bl42\" (UniqueName: \"kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42\") pod \"nova-scheduler-0\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " pod="openstack/nova-scheduler-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944438 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr54n\" (UniqueName: \"kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944538 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " 
pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.944839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.947888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.948147 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.948923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:14 crc kubenswrapper[4812]: I0218 16:57:14.959625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr54n\" (UniqueName: \"kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n\") pod \"nova-metadata-0\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " pod="openstack/nova-metadata-0" Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.147650 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.158081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.172733 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.291639 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerStarted","Data":"5a2f7bd381b96c7a416404c1a6ab6f466a90af046c829320aad3560e27cc306c"} Feb 18 16:57:15 crc kubenswrapper[4812]: W0218 16:57:15.641252 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode25d8f60_bc58_4057_ab3c_1f06c24e781b.slice/crio-07b9881a49a0d2bba7f38303b64205d0bf2069d7438fd8a2238b628a2c695bb0 WatchSource:0}: Error finding container 07b9881a49a0d2bba7f38303b64205d0bf2069d7438fd8a2238b628a2c695bb0: Status 404 returned error can't find the container with id 07b9881a49a0d2bba7f38303b64205d0bf2069d7438fd8a2238b628a2c695bb0 Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.643585 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:57:15 crc kubenswrapper[4812]: I0218 16:57:15.710899 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.304680 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerStarted","Data":"a163df77416a64f48a806f99ea069e2792bceb23044f8df779e0a1e0d6efec8c"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.304720 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerStarted","Data":"ea42ea77027f8f17414e49487bdb145ab894717ee91c08eddb0923ba4067493c"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.304730 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerStarted","Data":"1e68b161da9875d8c54d4a77c8bea5070d6123f846767c653f67af13998e48c9"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.306420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerStarted","Data":"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.306464 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerStarted","Data":"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.309014 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e25d8f60-bc58-4057-ab3c-1f06c24e781b","Type":"ContainerStarted","Data":"91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.309039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e25d8f60-bc58-4057-ab3c-1f06c24e781b","Type":"ContainerStarted","Data":"07b9881a49a0d2bba7f38303b64205d0bf2069d7438fd8a2238b628a2c695bb0"} Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.383183 4812 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.383156916 podStartE2EDuration="2.383156916s" podCreationTimestamp="2026-02-18 16:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:16.322198024 +0000 UTC m=+1656.587808933" watchObservedRunningTime="2026-02-18 16:57:16.383156916 +0000 UTC m=+1656.648767825" Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.386713 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.386702994 podStartE2EDuration="2.386702994s" podCreationTimestamp="2026-02-18 16:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:16.349292456 +0000 UTC m=+1656.614903365" watchObservedRunningTime="2026-02-18 16:57:16.386702994 +0000 UTC m=+1656.652313913" Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.396695 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.396677042 podStartE2EDuration="2.396677042s" podCreationTimestamp="2026-02-18 16:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:16.368337588 +0000 UTC m=+1656.633948517" watchObservedRunningTime="2026-02-18 16:57:16.396677042 +0000 UTC m=+1656.662287951" Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.521557 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675e2fb3-9001-4221-b2a4-46f47d613919" path="/var/lib/kubelet/pods/675e2fb3-9001-4221-b2a4-46f47d613919/volumes" Feb 18 16:57:16 crc kubenswrapper[4812]: I0218 16:57:16.524802 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e46049-f695-4a51-a0f5-528275ee70e5" path="/var/lib/kubelet/pods/79e46049-f695-4a51-a0f5-528275ee70e5/volumes" Feb 18 16:57:17 crc kubenswrapper[4812]: I0218 16:57:17.328854 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:17 crc kubenswrapper[4812]: I0218 16:57:17.329187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:18 crc kubenswrapper[4812]: I0218 16:57:18.381389 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mpph4" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" probeResult="failure" output=< Feb 18 16:57:18 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:57:18 crc kubenswrapper[4812]: > Feb 18 16:57:20 crc kubenswrapper[4812]: I0218 16:57:20.147856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 16:57:20 crc kubenswrapper[4812]: I0218 16:57:20.159031 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 16:57:20 crc kubenswrapper[4812]: I0218 16:57:20.159082 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 16:57:21 crc kubenswrapper[4812]: I0218 16:57:21.508045 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:57:21 crc 
kubenswrapper[4812]: E0218 16:57:21.508366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:57:24 crc kubenswrapper[4812]: I0218 16:57:24.118921 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 16:57:24 crc kubenswrapper[4812]: I0218 16:57:24.712956 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:57:24 crc kubenswrapper[4812]: I0218 16:57:24.714146 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.148517 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.158679 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.159136 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.182795 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.427333 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.801325 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:25 crc kubenswrapper[4812]: I0218 16:57:25.801336 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:26 crc kubenswrapper[4812]: I0218 16:57:26.179319 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:26 crc kubenswrapper[4812]: I0218 16:57:26.179620 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:28 crc kubenswrapper[4812]: I0218 16:57:28.404087 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mpph4" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" 
probeResult="failure" output=< Feb 18 16:57:28 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 16:57:28 crc kubenswrapper[4812]: > Feb 18 16:57:34 crc kubenswrapper[4812]: I0218 16:57:34.715932 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 16:57:34 crc kubenswrapper[4812]: I0218 16:57:34.716733 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 16:57:34 crc kubenswrapper[4812]: I0218 16:57:34.716873 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 16:57:34 crc kubenswrapper[4812]: I0218 16:57:34.721964 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.164742 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.166428 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.170050 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.484397 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.491090 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.491305 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.508003 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:57:35 crc kubenswrapper[4812]: E0218 16:57:35.508386 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.677805 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.679947 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.700338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.700397 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bng8l\" (UniqueName: \"kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.700598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.700621 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.700854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.701029 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.709874 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803436 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803551 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.803663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bng8l\" (UniqueName: \"kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.804973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.805650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.805842 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.805846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.805935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:35 crc kubenswrapper[4812]: I0218 16:57:35.829043 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bng8l\" (UniqueName: 
\"kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l\") pod \"dnsmasq-dns-89c5cd4d5-xjs65\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:36 crc kubenswrapper[4812]: I0218 16:57:36.011007 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:36 crc kubenswrapper[4812]: W0218 16:57:36.597668 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4023942_1810_40f4_90ab_8bb60749c701.slice/crio-e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144 WatchSource:0}: Error finding container e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144: Status 404 returned error can't find the container with id e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144 Feb 18 16:57:36 crc kubenswrapper[4812]: I0218 16:57:36.597773 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.469931 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.518329 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4023942-1810-40f4-90ab-8bb60749c701" containerID="efc686d66cc92ed86b7da9d8c989ea167b2f36d347a16fbffb605fd163f56d16" exitCode=0 Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.518449 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" event={"ID":"b4023942-1810-40f4-90ab-8bb60749c701","Type":"ContainerDied","Data":"efc686d66cc92ed86b7da9d8c989ea167b2f36d347a16fbffb605fd163f56d16"} Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.518487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" event={"ID":"b4023942-1810-40f4-90ab-8bb60749c701","Type":"ContainerStarted","Data":"e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144"} Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.599977 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:37 crc kubenswrapper[4812]: I0218 16:57:37.736166 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.234693 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.428790 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.429146 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-central-agent" containerID="cri-o://16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43" gracePeriod=30 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.429526 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="sg-core" containerID="cri-o://10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c" gracePeriod=30 Feb 18 16:57:38 crc 
kubenswrapper[4812]: I0218 16:57:38.429547 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-notification-agent" containerID="cri-o://8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed" gracePeriod=30 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.429512 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="proxy-httpd" containerID="cri-o://abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06" gracePeriod=30 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.551062 4812 generic.go:334] "Generic (PLEG): container finished" podID="df4233d8-a9fa-4374-830b-1831c247f919" containerID="8b917e339de5f0b91a49a97fe60513874f5caebb28979a9fb51a90d316adf4e3" exitCode=137 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.551230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"df4233d8-a9fa-4374-830b-1831c247f919","Type":"ContainerDied","Data":"8b917e339de5f0b91a49a97fe60513874f5caebb28979a9fb51a90d316adf4e3"} Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.562300 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mpph4" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" containerID="cri-o://6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0" gracePeriod=2 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.563418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" event={"ID":"b4023942-1810-40f4-90ab-8bb60749c701","Type":"ContainerStarted","Data":"18b6bb67866aa801b00340daaef8b5397548215fe9ccd04bb57d123e7c425eb2"} Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.563642 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.563842 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-log" containerID="cri-o://f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765" gracePeriod=30 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.564085 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-api" containerID="cri-o://762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193" gracePeriod=30 Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.611301 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" podStartSLOduration=3.611279058 podStartE2EDuration="3.611279058s" podCreationTimestamp="2026-02-18 16:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:38.60048558 +0000 UTC m=+1678.866096489" watchObservedRunningTime="2026-02-18 16:57:38.611279058 +0000 UTC m=+1678.876889967" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.783699 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.873888 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmjj\" (UniqueName: \"kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj\") pod \"df4233d8-a9fa-4374-830b-1831c247f919\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.873972 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle\") pod \"df4233d8-a9fa-4374-830b-1831c247f919\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.874058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data\") pod \"df4233d8-a9fa-4374-830b-1831c247f919\" (UID: \"df4233d8-a9fa-4374-830b-1831c247f919\") " Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.885547 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj" (OuterVolumeSpecName: "kube-api-access-4fmjj") pod "df4233d8-a9fa-4374-830b-1831c247f919" (UID: "df4233d8-a9fa-4374-830b-1831c247f919"). InnerVolumeSpecName "kube-api-access-4fmjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.917380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df4233d8-a9fa-4374-830b-1831c247f919" (UID: "df4233d8-a9fa-4374-830b-1831c247f919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.931490 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data" (OuterVolumeSpecName: "config-data") pod "df4233d8-a9fa-4374-830b-1831c247f919" (UID: "df4233d8-a9fa-4374-830b-1831c247f919"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.976159 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmjj\" (UniqueName: \"kubernetes.io/projected/df4233d8-a9fa-4374-830b-1831c247f919-kube-api-access-4fmjj\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.976190 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:38 crc kubenswrapper[4812]: I0218 16:57:38.976199 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4233d8-a9fa-4374-830b-1831c247f919-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.070131 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.185077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities\") pod \"584da31a-b836-4149-bb17-195580ee5898\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.185212 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv8xd\" (UniqueName: \"kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd\") pod \"584da31a-b836-4149-bb17-195580ee5898\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.185242 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content\") pod \"584da31a-b836-4149-bb17-195580ee5898\" (UID: \"584da31a-b836-4149-bb17-195580ee5898\") " Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.185868 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities" (OuterVolumeSpecName: "utilities") pod "584da31a-b836-4149-bb17-195580ee5898" (UID: "584da31a-b836-4149-bb17-195580ee5898"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.195860 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd" (OuterVolumeSpecName: "kube-api-access-jv8xd") pod "584da31a-b836-4149-bb17-195580ee5898" (UID: "584da31a-b836-4149-bb17-195580ee5898"). InnerVolumeSpecName "kube-api-access-jv8xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.209430 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584da31a-b836-4149-bb17-195580ee5898" (UID: "584da31a-b836-4149-bb17-195580ee5898"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.287370 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.287405 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584da31a-b836-4149-bb17-195580ee5898-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.287415 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv8xd\" (UniqueName: \"kubernetes.io/projected/584da31a-b836-4149-bb17-195580ee5898-kube-api-access-jv8xd\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.574142 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.574136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"df4233d8-a9fa-4374-830b-1831c247f919","Type":"ContainerDied","Data":"45635694cf40581252b69f8b2f7d8d6ef3bd05efbfb5b5ed482209b5eeee23df"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.574313 4812 scope.go:117] "RemoveContainer" containerID="8b917e339de5f0b91a49a97fe60513874f5caebb28979a9fb51a90d316adf4e3" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579011 4812 generic.go:334] "Generic (PLEG): container finished" podID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerID="abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06" exitCode=0 Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579315 4812 generic.go:334] "Generic (PLEG): container finished" podID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerID="10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c" exitCode=2 Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579325 4812 generic.go:334] "Generic (PLEG): container finished" podID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerID="16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43" exitCode=0 Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerDied","Data":"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerDied","Data":"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.579415 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerDied","Data":"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.583162 4812 generic.go:334] "Generic (PLEG): container finished" podID="acc6ca48-80c7-46b3-8214-d474aca72893" containerID="f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765" exitCode=143 Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.583245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerDied","Data":"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.587543 4812 generic.go:334] "Generic (PLEG): container finished" podID="584da31a-b836-4149-bb17-195580ee5898" containerID="6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0" exitCode=0 Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.587627 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerDied","Data":"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.587661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpph4" 
event={"ID":"584da31a-b836-4149-bb17-195580ee5898","Type":"ContainerDied","Data":"c43ed285af15cfe455ef8df47cddf33b561fb48ebd18bc62866fa30e8bdbd46e"} Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.587691 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpph4" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.631793 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.633641 4812 scope.go:117] "RemoveContainer" containerID="6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.651607 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.668955 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.677905 4812 scope.go:117] "RemoveContainer" containerID="f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.682190 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpph4"] Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.697330 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.698403 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="extract-utilities" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698426 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="extract-utilities" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.698468 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4233d8-a9fa-4374-830b-1831c247f919" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698477 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4233d8-a9fa-4374-830b-1831c247f919" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.698497 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698508 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.698524 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="extract-content" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698533 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="extract-content" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698800 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="584da31a-b836-4149-bb17-195580ee5898" containerName="registry-server" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.698830 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4233d8-a9fa-4374-830b-1831c247f919" 
containerName="nova-cell1-novncproxy-novncproxy" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.699651 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.703413 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.703752 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.719659 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.722581 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.724167 4812 scope.go:117] "RemoveContainer" containerID="007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.797891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hvq\" (UniqueName: \"kubernetes.io/projected/bc2c8be6-d665-457a-a0ae-0297547d9227-kube-api-access-l6hvq\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.798079 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.798420 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.798474 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.798523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.813493 4812 scope.go:117] "RemoveContainer" containerID="6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.814265 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0\": container with ID starting with 6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0 not found: ID does not exist" containerID="6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.814336 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0"} err="failed to get container status \"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0\": rpc error: code = NotFound desc = could not find container \"6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0\": container with ID starting with 6fbe05755e864f9ecea9b53647069c515ff9c5f419cef3d9da351343ca6c56c0 not found: ID does not exist" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.814364 4812 scope.go:117] "RemoveContainer" containerID="f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.814824 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73\": container with ID starting with f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73 not found: ID does not exist" containerID="f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.814860 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73"} err="failed to get container status \"f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73\": rpc error: code = NotFound desc = could not find container \"f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73\": container with ID starting with f118f01e8060bc88127d85596363df5333a9ff7abbd198354ced619136153f73 not found: ID does not exist" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.814888 4812 scope.go:117] "RemoveContainer" containerID="007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1" Feb 18 16:57:39 crc kubenswrapper[4812]: E0218 16:57:39.815194 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1\": container with ID starting with 007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1 not found: ID does not exist" containerID="007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.815216 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1"} err="failed to get container status \"007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1\": rpc error: code = NotFound desc = could not find container \"007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1\": container with ID starting with 007a19c5df83f02c064f4a38663efa11d36de39b759a6e7a3c25114633d400a1 not found: ID does not exist" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.899375 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.899492 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.899523 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.899550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.899583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hvq\" (UniqueName: \"kubernetes.io/projected/bc2c8be6-d665-457a-a0ae-0297547d9227-kube-api-access-l6hvq\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.905328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.905420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.905827 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.907557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc2c8be6-d665-457a-a0ae-0297547d9227-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:39 crc kubenswrapper[4812]: I0218 16:57:39.916905 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hvq\" (UniqueName: \"kubernetes.io/projected/bc2c8be6-d665-457a-a0ae-0297547d9227-kube-api-access-l6hvq\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"bc2c8be6-d665-457a-a0ae-0297547d9227\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:40 crc kubenswrapper[4812]: I0218 16:57:40.163150 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:40 crc kubenswrapper[4812]: I0218 16:57:40.523665 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584da31a-b836-4149-bb17-195580ee5898" path="/var/lib/kubelet/pods/584da31a-b836-4149-bb17-195580ee5898/volumes" Feb 18 16:57:40 crc kubenswrapper[4812]: I0218 16:57:40.525379 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4233d8-a9fa-4374-830b-1831c247f919" path="/var/lib/kubelet/pods/df4233d8-a9fa-4374-830b-1831c247f919/volumes" Feb 18 16:57:40 crc kubenswrapper[4812]: I0218 16:57:40.626587 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 16:57:41 crc kubenswrapper[4812]: I0218 16:57:41.616340 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bc2c8be6-d665-457a-a0ae-0297547d9227","Type":"ContainerStarted","Data":"4cedb3f1f272b24c2901b2a95499b5a10bb479a0530f65fb149f33d50613bb5c"} Feb 18 16:57:41 crc kubenswrapper[4812]: I0218 16:57:41.616403 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bc2c8be6-d665-457a-a0ae-0297547d9227","Type":"ContainerStarted","Data":"6582c846275c29847eabac6c2158fafafa45afe92b205a6cb1bd6162e31a513c"} Feb 18 16:57:41 crc kubenswrapper[4812]: I0218 16:57:41.635865 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.635843066 podStartE2EDuration="2.635843066s" podCreationTimestamp="2026-02-18 16:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:41.632503483 +0000 UTC m=+1681.898114392" watchObservedRunningTime="2026-02-18 16:57:41.635843066 +0000 UTC m=+1681.901453975" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.212595 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.376037 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data\") pod \"acc6ca48-80c7-46b3-8214-d474aca72893\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.376341 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs\") pod \"acc6ca48-80c7-46b3-8214-d474aca72893\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.376727 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszhb\" (UniqueName: \"kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb\") pod \"acc6ca48-80c7-46b3-8214-d474aca72893\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.376775 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle\") pod \"acc6ca48-80c7-46b3-8214-d474aca72893\" (UID: \"acc6ca48-80c7-46b3-8214-d474aca72893\") " Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.377807 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs" (OuterVolumeSpecName: "logs") pod "acc6ca48-80c7-46b3-8214-d474aca72893" (UID: "acc6ca48-80c7-46b3-8214-d474aca72893"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.378244 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acc6ca48-80c7-46b3-8214-d474aca72893-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.386445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb" (OuterVolumeSpecName: "kube-api-access-lszhb") pod "acc6ca48-80c7-46b3-8214-d474aca72893" (UID: "acc6ca48-80c7-46b3-8214-d474aca72893"). InnerVolumeSpecName "kube-api-access-lszhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.414661 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data" (OuterVolumeSpecName: "config-data") pod "acc6ca48-80c7-46b3-8214-d474aca72893" (UID: "acc6ca48-80c7-46b3-8214-d474aca72893"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.416774 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acc6ca48-80c7-46b3-8214-d474aca72893" (UID: "acc6ca48-80c7-46b3-8214-d474aca72893"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.480011 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.480053 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszhb\" (UniqueName: \"kubernetes.io/projected/acc6ca48-80c7-46b3-8214-d474aca72893-kube-api-access-lszhb\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.480066 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc6ca48-80c7-46b3-8214-d474aca72893-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.627567 4812 generic.go:334] "Generic (PLEG): container finished" podID="77c79f36-851f-461c-87e0-72071e1b7e22" containerID="3e4c35349c11fde8fcd51f4205b1fe847d6a381038f7f4dcc4cff42b3bcfe304" exitCode=0 Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.627612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85srd" event={"ID":"77c79f36-851f-461c-87e0-72071e1b7e22","Type":"ContainerDied","Data":"3e4c35349c11fde8fcd51f4205b1fe847d6a381038f7f4dcc4cff42b3bcfe304"} Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.630078 4812 generic.go:334] "Generic (PLEG): container finished" podID="acc6ca48-80c7-46b3-8214-d474aca72893" containerID="762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193" exitCode=0 Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.630137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerDied","Data":"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193"} Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.630199 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"acc6ca48-80c7-46b3-8214-d474aca72893","Type":"ContainerDied","Data":"5a2f7bd381b96c7a416404c1a6ab6f466a90af046c829320aad3560e27cc306c"} Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.630200 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.630218 4812 scope.go:117] "RemoveContainer" containerID="762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.701023 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.738303 4812 scope.go:117] "RemoveContainer" containerID="f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.738475 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.753805 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:42 crc kubenswrapper[4812]: E0218 16:57:42.754317 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-log" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.754336 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-log" Feb 18 16:57:42 crc kubenswrapper[4812]: E0218 16:57:42.754385 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-api" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.754398 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-api" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.754641 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-api" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.754670 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" containerName="nova-api-log" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.755849 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.758404 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.758540 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.758603 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.772783 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.785314 4812 scope.go:117] "RemoveContainer" containerID="762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193" Feb 18 16:57:42 crc kubenswrapper[4812]: E0218 16:57:42.785934 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193\": container with ID starting with 762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193 not found: ID does not exist" containerID="762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.785972 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193"} err="failed to get container status \"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193\": rpc error: code = NotFound desc = could not find container \"762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193\": container with ID starting with 762505935b014631f3cec3edb37aa64d6a041093c96ac3f8c15e48e5ea139193 not found: ID does not exist" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.786003 4812 scope.go:117] "RemoveContainer" containerID="f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765" Feb 18 16:57:42 crc kubenswrapper[4812]: E0218 16:57:42.786335 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765\": container with ID starting with f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765 not found: ID does not exist" containerID="f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.786376 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765"} err="failed to get container status \"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765\": rpc error: code = NotFound desc = could not find container \"f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765\": container with ID starting with f4daaaa2cab82c807d82e09a8089724e2ff8d2df730974f056c9249b97f1c765 not found: ID does not exist" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.888305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 
16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.888399 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.888531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.888686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.888964 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4v7l\" (UniqueName: \"kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.889172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.995416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.995932 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.995977 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.996026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.996107 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4v7l\" (UniqueName: \"kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l\") pod \"nova-api-0\" (UID: 
\"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.996140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:42 crc kubenswrapper[4812]: I0218 16:57:42.996683 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.001861 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.004534 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.004781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.006892 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.017178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4v7l\" (UniqueName: \"kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l\") pod \"nova-api-0\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.080477 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.504286 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524121 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524240 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524435 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524461 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524497 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524518 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.524588 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9qz\" (UniqueName: \"kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz\") pod \"fc04c0a0-9857-4332-a1f2-f9368702349b\" (UID: \"fc04c0a0-9857-4332-a1f2-f9368702349b\") " Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.525292 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.525409 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.526235 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.526260 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc04c0a0-9857-4332-a1f2-f9368702349b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.529474 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz" (OuterVolumeSpecName: "kube-api-access-hb9qz") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "kube-api-access-hb9qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.534227 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts" (OuterVolumeSpecName: "scripts") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.605312 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.609232 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.624471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.628151 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.628181 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.628193 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.628203 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9qz\" (UniqueName: \"kubernetes.io/projected/fc04c0a0-9857-4332-a1f2-f9368702349b-kube-api-access-hb9qz\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.643581 4812 generic.go:334] "Generic (PLEG): container finished" podID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerID="8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed" exitCode=0 Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.643660 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerDied","Data":"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed"} Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.643715 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.643755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc04c0a0-9857-4332-a1f2-f9368702349b","Type":"ContainerDied","Data":"ce2eb5a594a7745a59d2cd435c77b61c67283dc55763451d4f11699467697dbc"} Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.643814 4812 scope.go:117] "RemoveContainer" containerID="abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.658522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerStarted","Data":"16f5a6d831e8e1a5738d8fb58216860f530dd21a38f6e67f237761bb43c47a22"} Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.663781 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data" (OuterVolumeSpecName: "config-data") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.672460 4812 scope.go:117] "RemoveContainer" containerID="10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.674143 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc04c0a0-9857-4332-a1f2-f9368702349b" (UID: "fc04c0a0-9857-4332-a1f2-f9368702349b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.699775 4812 scope.go:117] "RemoveContainer" containerID="8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.724742 4812 scope.go:117] "RemoveContainer" containerID="16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.730476 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.730502 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc04c0a0-9857-4332-a1f2-f9368702349b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.754382 4812 scope.go:117] "RemoveContainer" containerID="abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06" Feb 18 16:57:43 crc kubenswrapper[4812]: E0218 16:57:43.754960 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06\": container with ID starting with abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06 not found: ID does not exist" containerID="abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.754992 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06"} err="failed to get container status \"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06\": rpc error: code = NotFound desc = could not find container \"abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06\": container with ID starting with abfb9210a5a758adbccd4aa4e2b8296d75843dbdd2297489bf4fa6a55b831e06 not found: ID does not exist" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.755014 4812 scope.go:117] "RemoveContainer" containerID="10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c" Feb 18 16:57:43 crc kubenswrapper[4812]: E0218 16:57:43.756084 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c\": container with ID starting with 10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c not found: ID does not exist" containerID="10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.756121 4812 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c"} err="failed to get container status \"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c\": rpc error: code = NotFound desc = could not find container \"10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c\": container with ID starting with 10c67c56ece0d49ca9754b94a0789a008ea20dd380bfaef9221fcde25c38e27c not found: ID does not exist" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.756136 4812 scope.go:117] "RemoveContainer" containerID="8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed" Feb 18 16:57:43 crc kubenswrapper[4812]: E0218 16:57:43.756370 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed\": container with ID starting with 8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed not found: ID does not exist" containerID="8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.756393 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed"} err="failed to get container status \"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed\": rpc error: code = NotFound desc = could not find container \"8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed\": container with ID starting with 8e00e313383597ff2e62bd3d26cde02518ede37ebec92222df9abf08884af5ed not found: ID does not exist" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.756410 4812 scope.go:117] "RemoveContainer" containerID="16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43" Feb 18 16:57:43 crc kubenswrapper[4812]: E0218 16:57:43.756637 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43\": container with ID starting with 16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43 not found: ID does not exist" containerID="16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.757230 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43"} err="failed to get container status \"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43\": rpc error: code = NotFound desc = could not find container \"16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43\": container with ID starting with 16d63d7c6f22977159417642b0137c5d58db9a231323af554aeec5e86c027a43 not found: ID does not exist" Feb 18 16:57:43 crc kubenswrapper[4812]: I0218 16:57:43.992362 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.009756 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.022169 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: E0218 16:57:44.022707 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-notification-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.022732 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-notification-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: E0218 16:57:44.022753 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-central-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.022762 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-central-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: E0218 16:57:44.022793 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="sg-core" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.022801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="sg-core" Feb 18 16:57:44 crc kubenswrapper[4812]: E0218 16:57:44.022811 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="proxy-httpd" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.022818 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="proxy-httpd" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.023039 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="proxy-httpd" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.023065 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-central-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.023079 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="ceilometer-notification-agent" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.023121 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" containerName="sg-core" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.025776 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.033610 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.033812 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.033918 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.043957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqpw\" (UniqueName: \"kubernetes.io/projected/f7e31fd2-effd-444c-9363-3f7cef593859-kube-api-access-hzqpw\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044078 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044145 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-scripts\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-config-data\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-log-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.044332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-run-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 
18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.062384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146587 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146637 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146661 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146685 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-scripts\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146720 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-config-data\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146738 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-log-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-run-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.146851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzqpw\" (UniqueName: \"kubernetes.io/projected/f7e31fd2-effd-444c-9363-3f7cef593859-kube-api-access-hzqpw\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.147693 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-run-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.147774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f7e31fd2-effd-444c-9363-3f7cef593859-log-httpd\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.150697 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.151539 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-scripts\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.151817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-config-data\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.152498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.154053 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7e31fd2-effd-444c-9363-3f7cef593859-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.167920 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqpw\" (UniqueName: \"kubernetes.io/projected/f7e31fd2-effd-444c-9363-3f7cef593859-kube-api-access-hzqpw\") pod \"ceilometer-0\" (UID: \"f7e31fd2-effd-444c-9363-3f7cef593859\") " pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.273425 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.277006 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.349907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data\") pod \"77c79f36-851f-461c-87e0-72071e1b7e22\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.349946 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle\") pod \"77c79f36-851f-461c-87e0-72071e1b7e22\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.350280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z25nq\" (UniqueName: \"kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq\") pod \"77c79f36-851f-461c-87e0-72071e1b7e22\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.350342 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts\") pod \"77c79f36-851f-461c-87e0-72071e1b7e22\" (UID: \"77c79f36-851f-461c-87e0-72071e1b7e22\") " Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.363675 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts" (OuterVolumeSpecName: "scripts") pod "77c79f36-851f-461c-87e0-72071e1b7e22" (UID: "77c79f36-851f-461c-87e0-72071e1b7e22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.367457 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq" (OuterVolumeSpecName: "kube-api-access-z25nq") pod "77c79f36-851f-461c-87e0-72071e1b7e22" (UID: "77c79f36-851f-461c-87e0-72071e1b7e22"). InnerVolumeSpecName "kube-api-access-z25nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.397786 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data" (OuterVolumeSpecName: "config-data") pod "77c79f36-851f-461c-87e0-72071e1b7e22" (UID: "77c79f36-851f-461c-87e0-72071e1b7e22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.398570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77c79f36-851f-461c-87e0-72071e1b7e22" (UID: "77c79f36-851f-461c-87e0-72071e1b7e22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.464871 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z25nq\" (UniqueName: \"kubernetes.io/projected/77c79f36-851f-461c-87e0-72071e1b7e22-kube-api-access-z25nq\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.464903 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.464917 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.464944 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c79f36-851f-461c-87e0-72071e1b7e22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.534267 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc6ca48-80c7-46b3-8214-d474aca72893" path="/var/lib/kubelet/pods/acc6ca48-80c7-46b3-8214-d474aca72893/volumes" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.535144 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc04c0a0-9857-4332-a1f2-f9368702349b" path="/var/lib/kubelet/pods/fc04c0a0-9857-4332-a1f2-f9368702349b/volumes" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.670521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-85srd" event={"ID":"77c79f36-851f-461c-87e0-72071e1b7e22","Type":"ContainerDied","Data":"f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f"} Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.670576 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7219330671522cca8912a3cd47f83293413626ebf3db6fba8fb836334f8648f" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.670583 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-85srd" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.680485 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerStarted","Data":"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba"} Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.680543 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerStarted","Data":"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680"} Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.725806 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.725781995 podStartE2EDuration="2.725781995s" podCreationTimestamp="2026-02-18 16:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:44.717165481 +0000 UTC m=+1684.982776400" watchObservedRunningTime="2026-02-18 16:57:44.725781995 +0000 UTC m=+1684.991392894" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.736871 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: E0218 16:57:44.737425 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c79f36-851f-461c-87e0-72071e1b7e22" containerName="nova-cell1-conductor-db-sync" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.737444 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c79f36-851f-461c-87e0-72071e1b7e22" containerName="nova-cell1-conductor-db-sync" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.737625 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c79f36-851f-461c-87e0-72071e1b7e22" containerName="nova-cell1-conductor-db-sync" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.738345 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.740194 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.745827 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.840578 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.874234 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.874601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvrz\" (UniqueName: \"kubernetes.io/projected/3d3f7e00-7173-429d-957b-31388ff870d2-kube-api-access-kkvrz\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.874793 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.976796 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkvrz\" (UniqueName: \"kubernetes.io/projected/3d3f7e00-7173-429d-957b-31388ff870d2-kube-api-access-kkvrz\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.976929 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.977032 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.985884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:44 crc kubenswrapper[4812]: I0218 16:57:44.988821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3f7e00-7173-429d-957b-31388ff870d2-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.008685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkvrz\" (UniqueName: \"kubernetes.io/projected/3d3f7e00-7173-429d-957b-31388ff870d2-kube-api-access-kkvrz\") pod \"nova-cell1-conductor-0\" (UID: \"3d3f7e00-7173-429d-957b-31388ff870d2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.062191 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.163569 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:45 crc kubenswrapper[4812]: W0218 16:57:45.561579 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3f7e00_7173_429d_957b_31388ff870d2.slice/crio-814eb7af14b70e2209b59b0c5fcff300508c336e208f1a5661f92c4857819c83 WatchSource:0}: Error finding container 814eb7af14b70e2209b59b0c5fcff300508c336e208f1a5661f92c4857819c83: Status 404 returned error can't find the container with id 814eb7af14b70e2209b59b0c5fcff300508c336e208f1a5661f92c4857819c83 Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.562334 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.699545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d3f7e00-7173-429d-957b-31388ff870d2","Type":"ContainerStarted","Data":"814eb7af14b70e2209b59b0c5fcff300508c336e208f1a5661f92c4857819c83"} Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.703206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7e31fd2-effd-444c-9363-3f7cef593859","Type":"ContainerStarted","Data":"40c3462b0fe263c576b97500c492f8963f1e22ca8900d608454f5e1c5ab968f4"} Feb 18 16:57:45 crc kubenswrapper[4812]: I0218 16:57:45.703250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7e31fd2-effd-444c-9363-3f7cef593859","Type":"ContainerStarted","Data":"918c383883a6a6e16cd1ec4752b3a54db7f1a2f1a91531990a3566d8796915ff"} Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.013253 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.089561 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.090423 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="dnsmasq-dns" containerID="cri-o://a18639cba02e7e9086a8058c19396825d4507a4d98edb287e96a558523789bce" gracePeriod=10 Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.508440 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:57:46 crc kubenswrapper[4812]: E0218 16:57:46.508699 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.714573 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d3f7e00-7173-429d-957b-31388ff870d2","Type":"ContainerStarted","Data":"6bf561b3045e3955f90d087a332f3603117f63b4f525f142b94b9ce72d1224bc"} Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.714995 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.717359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7e31fd2-effd-444c-9363-3f7cef593859","Type":"ContainerStarted","Data":"c726ca1c4a4382bdcbb6fd01ca96ed7f1aaeb0a6d9d138f3c802a819630b0953"} Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.723692 4812 generic.go:334] "Generic (PLEG): container finished" podID="f70cd4ea-091d-4739-b827-358c376b32b1" containerID="a18639cba02e7e9086a8058c19396825d4507a4d98edb287e96a558523789bce" exitCode=0 Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.723735 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerDied","Data":"a18639cba02e7e9086a8058c19396825d4507a4d98edb287e96a558523789bce"} Feb 18 16:57:46 crc kubenswrapper[4812]: I0218 16:57:46.737838 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.73781654 podStartE2EDuration="2.73781654s" podCreationTimestamp="2026-02-18 16:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:46.736540778 +0000 UTC m=+1687.002151717" watchObservedRunningTime="2026-02-18 16:57:46.73781654 +0000 UTC m=+1687.003427469" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.239123 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.354428 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.354519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.355007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.355140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.355181 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sktxh\" (UniqueName: \"kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.355243 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb\") pod \"f70cd4ea-091d-4739-b827-358c376b32b1\" (UID: \"f70cd4ea-091d-4739-b827-358c376b32b1\") " Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.371884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh" (OuterVolumeSpecName: "kube-api-access-sktxh") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "kube-api-access-sktxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.422821 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.435373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.441318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.442069 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.443576 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config" (OuterVolumeSpecName: "config") pod "f70cd4ea-091d-4739-b827-358c376b32b1" (UID: "f70cd4ea-091d-4739-b827-358c376b32b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.458018 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.458934 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.458990 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sktxh\" (UniqueName: \"kubernetes.io/projected/f70cd4ea-091d-4739-b827-358c376b32b1-kube-api-access-sktxh\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.459049 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.459115 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.459173 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f70cd4ea-091d-4739-b827-358c376b32b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.742567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7e31fd2-effd-444c-9363-3f7cef593859","Type":"ContainerStarted","Data":"6e0df8f072ae8accb0c2f9f413d80e0dd5c9c88de195560bddedfdb391c7982f"} Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.744652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" 
event={"ID":"f70cd4ea-091d-4739-b827-358c376b32b1","Type":"ContainerDied","Data":"1db5d138e1491c07702a29328e6b27db143d852e2d0de3904741d89dcde29c09"} Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.744722 4812 scope.go:117] "RemoveContainer" containerID="a18639cba02e7e9086a8058c19396825d4507a4d98edb287e96a558523789bce" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.744868 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-m87z2" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.786706 4812 scope.go:117] "RemoveContainer" containerID="455314acb4b5429e79e33a5631a7a6a38bade72dd69a0a5f47aa4b5c9d6806c1" Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.824184 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:57:47 crc kubenswrapper[4812]: I0218 16:57:47.857534 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-m87z2"] Feb 18 16:57:48 crc kubenswrapper[4812]: I0218 16:57:48.520378 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" path="/var/lib/kubelet/pods/f70cd4ea-091d-4739-b827-358c376b32b1/volumes" Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.097725 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.166352 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.187470 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.778713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7e31fd2-effd-444c-9363-3f7cef593859","Type":"ContainerStarted","Data":"82c8e7d2459d2e806e5d1361cbec7a9739c510304018ea8663ce8f2175bed403"} Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.805794 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.121339343 podStartE2EDuration="7.805770917s" podCreationTimestamp="2026-02-18 16:57:43 +0000 UTC" firstStartedPulling="2026-02-18 16:57:44.84163645 +0000 UTC m=+1685.107247359" lastFinishedPulling="2026-02-18 16:57:49.526068034 +0000 UTC m=+1689.791678933" observedRunningTime="2026-02-18 16:57:50.799575973 +0000 UTC m=+1691.065186902" watchObservedRunningTime="2026-02-18 16:57:50.805770917 +0000 UTC m=+1691.071381826" Feb 18 16:57:50 crc kubenswrapper[4812]: I0218 16:57:50.807413 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.008410 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5fxqc"] Feb 18 16:57:51 crc kubenswrapper[4812]: E0218 16:57:51.008943 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="dnsmasq-dns" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.008968 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="dnsmasq-dns" Feb 18 16:57:51 crc kubenswrapper[4812]: E0218 16:57:51.009005 4812 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="init" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.009015 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="init" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.009263 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70cd4ea-091d-4739-b827-358c376b32b1" containerName="dnsmasq-dns" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.010077 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.013324 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.014608 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.021440 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fxqc"] Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.150994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqrx\" (UniqueName: \"kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.151255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.152291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.152501 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.254419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqrx\" (UniqueName: \"kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.254473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " 
pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.254583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.254609 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.259076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.260652 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.267190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.271315 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqrx\" (UniqueName: \"kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx\") pod \"nova-cell1-cell-mapping-5fxqc\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.331445 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.672677 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fxqc"] Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.805560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fxqc" event={"ID":"dc3f37a0-cac2-4ac9-a087-ef87868855f7","Type":"ContainerStarted","Data":"3c0984aa19b958de9b15cc03a3631976946255e204995ec784461b0d767d4f8d"} Feb 18 16:57:51 crc kubenswrapper[4812]: I0218 16:57:51.805610 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 16:57:52 crc kubenswrapper[4812]: I0218 16:57:52.814653 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fxqc" event={"ID":"dc3f37a0-cac2-4ac9-a087-ef87868855f7","Type":"ContainerStarted","Data":"fac5100eb8620201776fa1124ccacdbfb55267c5780e4777974f050f163d3f54"} Feb 18 16:57:52 crc kubenswrapper[4812]: I0218 16:57:52.839224 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5fxqc" podStartSLOduration=2.839206902 podStartE2EDuration="2.839206902s" podCreationTimestamp="2026-02-18 16:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:57:52.829698656 +0000 UTC m=+1693.095309565" watchObservedRunningTime="2026-02-18 16:57:52.839206902 +0000 UTC m=+1693.104817811" Feb 18 16:57:53 crc kubenswrapper[4812]: I0218 16:57:53.081005 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:57:53 crc kubenswrapper[4812]: I0218 16:57:53.081072 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:57:54 crc kubenswrapper[4812]: I0218 16:57:54.092277 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:54 crc kubenswrapper[4812]: I0218 16:57:54.092277 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:57:57 crc kubenswrapper[4812]: I0218 16:57:57.860156 4812 generic.go:334] "Generic (PLEG): container finished" podID="dc3f37a0-cac2-4ac9-a087-ef87868855f7" containerID="fac5100eb8620201776fa1124ccacdbfb55267c5780e4777974f050f163d3f54" exitCode=0 Feb 18 16:57:57 crc kubenswrapper[4812]: I0218 16:57:57.860243 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fxqc" event={"ID":"dc3f37a0-cac2-4ac9-a087-ef87868855f7","Type":"ContainerDied","Data":"fac5100eb8620201776fa1124ccacdbfb55267c5780e4777974f050f163d3f54"} Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.238149 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.333009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle\") pod \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.333184 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts\") pod \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.333377 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data\") pod \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.333464 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqrx\" (UniqueName: \"kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx\") pod \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\" (UID: \"dc3f37a0-cac2-4ac9-a087-ef87868855f7\") " Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.338916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts" (OuterVolumeSpecName: "scripts") pod "dc3f37a0-cac2-4ac9-a087-ef87868855f7" (UID: "dc3f37a0-cac2-4ac9-a087-ef87868855f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.339477 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx" (OuterVolumeSpecName: "kube-api-access-4dqrx") pod "dc3f37a0-cac2-4ac9-a087-ef87868855f7" (UID: "dc3f37a0-cac2-4ac9-a087-ef87868855f7"). InnerVolumeSpecName "kube-api-access-4dqrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.364919 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3f37a0-cac2-4ac9-a087-ef87868855f7" (UID: "dc3f37a0-cac2-4ac9-a087-ef87868855f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.379819 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data" (OuterVolumeSpecName: "config-data") pod "dc3f37a0-cac2-4ac9-a087-ef87868855f7" (UID: "dc3f37a0-cac2-4ac9-a087-ef87868855f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.435455 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.435485 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqrx\" (UniqueName: \"kubernetes.io/projected/dc3f37a0-cac2-4ac9-a087-ef87868855f7-kube-api-access-4dqrx\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.435497 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.435506 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3f37a0-cac2-4ac9-a087-ef87868855f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.510960 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:57:59 crc kubenswrapper[4812]: E0218 16:57:59.511264 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.880706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5fxqc" event={"ID":"dc3f37a0-cac2-4ac9-a087-ef87868855f7","Type":"ContainerDied","Data":"3c0984aa19b958de9b15cc03a3631976946255e204995ec784461b0d767d4f8d"} Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.880752 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0984aa19b958de9b15cc03a3631976946255e204995ec784461b0d767d4f8d" Feb 18 16:57:59 crc kubenswrapper[4812]: I0218 16:57:59.880758 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5fxqc" Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.077036 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.077392 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-log" containerID="cri-o://f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680" gracePeriod=30 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.078000 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-api" containerID="cri-o://6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba" gracePeriod=30 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.099956 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.100335 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" containerID="cri-o://91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" gracePeriod=30 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.111691 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.112474 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" containerID="cri-o://ea42ea77027f8f17414e49487bdb145ab894717ee91c08eddb0923ba4067493c" gracePeriod=30 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.112580 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" containerID="cri-o://a163df77416a64f48a806f99ea069e2792bceb23044f8df779e0a1e0d6efec8c" gracePeriod=30 Feb 18 16:58:00 crc kubenswrapper[4812]: E0218 16:58:00.151469 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:00 crc kubenswrapper[4812]: E0218 16:58:00.154417 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:00 crc kubenswrapper[4812]: E0218 16:58:00.155903 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:00 crc kubenswrapper[4812]: E0218 16:58:00.155951 4812 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.903301 4812 generic.go:334] "Generic (PLEG): container finished" podID="bd168a29-174c-4210-93dd-e6fd2284d700" containerID="f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680" exitCode=143 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.903381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerDied","Data":"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680"} Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.905489 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerID="ea42ea77027f8f17414e49487bdb145ab894717ee91c08eddb0923ba4067493c" exitCode=143 Feb 18 16:58:00 crc kubenswrapper[4812]: I0218 16:58:00.905515 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerDied","Data":"ea42ea77027f8f17414e49487bdb145ab894717ee91c08eddb0923ba4067493c"} Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.244403 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": read tcp 10.217.0.2:52682->10.217.0.224:8775: read: connection reset by peer" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.244480 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": read tcp 10.217.0.2:52696->10.217.0.224:8775: read: connection reset by peer" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.824685 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.934012 4812 generic.go:334] "Generic (PLEG): container finished" podID="bd168a29-174c-4210-93dd-e6fd2284d700" containerID="6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba" exitCode=0 Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.934057 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerDied","Data":"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba"} Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.934083 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.934117 4812 scope.go:117] "RemoveContainer" containerID="6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.934093 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bd168a29-174c-4210-93dd-e6fd2284d700","Type":"ContainerDied","Data":"16f5a6d831e8e1a5738d8fb58216860f530dd21a38f6e67f237761bb43c47a22"} Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.937545 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerID="a163df77416a64f48a806f99ea069e2792bceb23044f8df779e0a1e0d6efec8c" exitCode=0 Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.937597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerDied","Data":"a163df77416a64f48a806f99ea069e2792bceb23044f8df779e0a1e0d6efec8c"} Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.937621 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a","Type":"ContainerDied","Data":"1e68b161da9875d8c54d4a77c8bea5070d6123f846767c653f67af13998e48c9"} Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.937632 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e68b161da9875d8c54d4a77c8bea5070d6123f846767c653f67af13998e48c9" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949276 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4v7l\" (UniqueName: \"kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949597 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949619 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949688 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.949720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data\") pod \"bd168a29-174c-4210-93dd-e6fd2284d700\" (UID: \"bd168a29-174c-4210-93dd-e6fd2284d700\") " Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.950249 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs" (OuterVolumeSpecName: "logs") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.950580 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd168a29-174c-4210-93dd-e6fd2284d700-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.954991 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l" (OuterVolumeSpecName: "kube-api-access-b4v7l") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "kube-api-access-b4v7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.982441 4812 scope.go:117] "RemoveContainer" containerID="f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680" Feb 18 16:58:03 crc kubenswrapper[4812]: I0218 16:58:03.989479 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.057299 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.059673 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.059697 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4v7l\" (UniqueName: \"kubernetes.io/projected/bd168a29-174c-4210-93dd-e6fd2284d700-kube-api-access-b4v7l\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.074625 4812 scope.go:117] "RemoveContainer" containerID="6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.077080 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data" (OuterVolumeSpecName: "config-data") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.078254 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.088400 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba\": container with ID starting with 6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba not found: ID does not exist" containerID="6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.088447 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba"} err="failed to get container status \"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba\": rpc error: code = NotFound desc = could not find container \"6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba\": container with ID starting with 6d6bab205cbc3f05112302591802456f3090413fcf1a49ec89f69f111b2542ba not found: ID does not exist" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.088472 4812 scope.go:117] "RemoveContainer" containerID="f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.096265 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680\": container with ID starting with f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680 not found: ID does not exist" containerID="f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.096309 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680"} err="failed to get container status \"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680\": rpc error: code = NotFound desc = could not find container \"f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680\": container with ID starting with f8e8c27bf89952d039226833ab2fd1ba5d7c1a99c18cbf1e3dfd11d43585d680 not found: ID does not exist" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.102251 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd168a29-174c-4210-93dd-e6fd2284d700" (UID: "bd168a29-174c-4210-93dd-e6fd2284d700"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.169820 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs\") pod \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.169940 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle\") pod \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.169991 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr54n\" (UniqueName: \"kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n\") pod \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.170046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data\") pod \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.170123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs\") pod \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\" (UID: \"b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a\") " Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.170714 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.170753 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.170766 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd168a29-174c-4210-93dd-e6fd2284d700-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.174303 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs" (OuterVolumeSpecName: "logs") pod "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" (UID: "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.204331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n" (OuterVolumeSpecName: "kube-api-access-lr54n") pod "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" (UID: "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a"). InnerVolumeSpecName "kube-api-access-lr54n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.262293 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data" (OuterVolumeSpecName: "config-data") pod "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" (UID: "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.272934 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr54n\" (UniqueName: \"kubernetes.io/projected/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-kube-api-access-lr54n\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.272967 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.272982 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.287534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" (UID: "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.317771 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" (UID: "b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.319083 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.335278 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353164 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.353628 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-api" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353642 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-api" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.353655 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3f37a0-cac2-4ac9-a087-ef87868855f7" containerName="nova-manage" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353661 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3f37a0-cac2-4ac9-a087-ef87868855f7" containerName="nova-manage" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.353682 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-log" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353690 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-log" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.353717 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353724 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" Feb 18 16:58:04 crc kubenswrapper[4812]: E0218 16:58:04.353740 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353747 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353964 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-log" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.353994 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-api" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.354004 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" containerName="nova-api-log" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.354019 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3f37a0-cac2-4ac9-a087-ef87868855f7" containerName="nova-manage" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.354031 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" containerName="nova-metadata-metadata" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.355272 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.358441 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.359886 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.360219 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.366459 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.378626 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.378659 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64753db2-4320-4180-9613-cf76f62101dc-logs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480632 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n576b\" (UniqueName: \"kubernetes.io/projected/64753db2-4320-4180-9613-cf76f62101dc-kube-api-access-n576b\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.480705 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-config-data\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc 
kubenswrapper[4812]: I0218 16:58:04.519480 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd168a29-174c-4210-93dd-e6fd2284d700" path="/var/lib/kubelet/pods/bd168a29-174c-4210-93dd-e6fd2284d700/volumes" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582298 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n576b\" (UniqueName: \"kubernetes.io/projected/64753db2-4320-4180-9613-cf76f62101dc-kube-api-access-n576b\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582396 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-config-data\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64753db2-4320-4180-9613-cf76f62101dc-logs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.582550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.583227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64753db2-4320-4180-9613-cf76f62101dc-logs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.586298 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.587540 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.595887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.595995 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64753db2-4320-4180-9613-cf76f62101dc-config-data\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.609466 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n576b\" (UniqueName: \"kubernetes.io/projected/64753db2-4320-4180-9613-cf76f62101dc-kube-api-access-n576b\") pod \"nova-api-0\" (UID: \"64753db2-4320-4180-9613-cf76f62101dc\") " pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.731829 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.957360 4812 generic.go:334] "Generic (PLEG): container finished" podID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" exitCode=0 Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.957432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e25d8f60-bc58-4057-ab3c-1f06c24e781b","Type":"ContainerDied","Data":"91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6"} Feb 18 16:58:04 crc kubenswrapper[4812]: I0218 16:58:04.960469 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.012008 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.025623 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.037891 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.049319 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.052301 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.052992 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.052996 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 16:58:05 crc kubenswrapper[4812]: E0218 16:58:05.148738 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6 is running failed: container process not found" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:05 crc kubenswrapper[4812]: E0218 16:58:05.149233 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6 is running failed: container process not found" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:05 crc kubenswrapper[4812]: E0218 16:58:05.149585 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6 is running failed: container process not found" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 16:58:05 crc kubenswrapper[4812]: E0218 16:58:05.149619 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.195710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3402a7-2751-498c-af17-1895ac40880d-logs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.195760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774b2\" (UniqueName: \"kubernetes.io/projected/2e3402a7-2751-498c-af17-1895ac40880d-kube-api-access-774b2\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.195802 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-config-data\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc 
kubenswrapper[4812]: I0218 16:58:05.195859 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.195887 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.263977 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.297252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3402a7-2751-498c-af17-1895ac40880d-logs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.297320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774b2\" (UniqueName: \"kubernetes.io/projected/2e3402a7-2751-498c-af17-1895ac40880d-kube-api-access-774b2\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.297385 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-config-data\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.297468 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.297510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.299566 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3402a7-2751-498c-af17-1895ac40880d-logs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.304476 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-config-data\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.311633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.311789 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3402a7-2751-498c-af17-1895ac40880d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.317452 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774b2\" (UniqueName: \"kubernetes.io/projected/2e3402a7-2751-498c-af17-1895ac40880d-kube-api-access-774b2\") pod \"nova-metadata-0\" (UID: \"2e3402a7-2751-498c-af17-1895ac40880d\") " pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.380914 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.508649 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.514291 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle\") pod \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.514503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bl42\" (UniqueName: \"kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42\") pod \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.514659 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data\") pod \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\" (UID: \"e25d8f60-bc58-4057-ab3c-1f06c24e781b\") " Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.530913 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42" (OuterVolumeSpecName: "kube-api-access-8bl42") pod "e25d8f60-bc58-4057-ab3c-1f06c24e781b" (UID: "e25d8f60-bc58-4057-ab3c-1f06c24e781b"). InnerVolumeSpecName "kube-api-access-8bl42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.546484 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e25d8f60-bc58-4057-ab3c-1f06c24e781b" (UID: "e25d8f60-bc58-4057-ab3c-1f06c24e781b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.568116 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data" (OuterVolumeSpecName: "config-data") pod "e25d8f60-bc58-4057-ab3c-1f06c24e781b" (UID: "e25d8f60-bc58-4057-ab3c-1f06c24e781b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.617405 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bl42\" (UniqueName: \"kubernetes.io/projected/e25d8f60-bc58-4057-ab3c-1f06c24e781b-kube-api-access-8bl42\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.617901 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.617913 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e25d8f60-bc58-4057-ab3c-1f06c24e781b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.847484 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.978993 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64753db2-4320-4180-9613-cf76f62101dc","Type":"ContainerStarted","Data":"68db037b10b13220b74c7879a877cabf5f34cc10da0594071f22488af2d23eb7"} Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.979045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64753db2-4320-4180-9613-cf76f62101dc","Type":"ContainerStarted","Data":"27e33a3cd8f8efbe7241cb1db6542ad09c9e104f6abe5cb5cb6a21db558229a2"} Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.981632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e25d8f60-bc58-4057-ab3c-1f06c24e781b","Type":"ContainerDied","Data":"07b9881a49a0d2bba7f38303b64205d0bf2069d7438fd8a2238b628a2c695bb0"} Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.981669 4812 scope.go:117] "RemoveContainer" containerID="91f72afb064db57ba845989589e9e777bef6abd69f1ce9ea2af439e65da682b6" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.981798 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:58:05 crc kubenswrapper[4812]: I0218 16:58:05.987517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3402a7-2751-498c-af17-1895ac40880d","Type":"ContainerStarted","Data":"eba0db8d36e51b0e4091281741d894a17581f582db4b019fbd6e96af9883ab85"} Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.028948 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.042147 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.057072 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:06 crc kubenswrapper[4812]: E0218 16:58:06.057649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.057674 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.057909 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" containerName="nova-scheduler-scheduler" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.058775 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.060862 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.071971 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.132514 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-config-data\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.132587 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lz46\" (UniqueName: \"kubernetes.io/projected/089fc991-d92f-4f7d-9869-449514917e01-kube-api-access-9lz46\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.132674 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.234910 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-config-data\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.234968 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9lz46\" (UniqueName: \"kubernetes.io/projected/089fc991-d92f-4f7d-9869-449514917e01-kube-api-access-9lz46\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.235048 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.238793 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.238935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/089fc991-d92f-4f7d-9869-449514917e01-config-data\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.250842 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lz46\" (UniqueName: \"kubernetes.io/projected/089fc991-d92f-4f7d-9869-449514917e01-kube-api-access-9lz46\") pod \"nova-scheduler-0\" (UID: \"089fc991-d92f-4f7d-9869-449514917e01\") " pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.381583 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.554933 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a" path="/var/lib/kubelet/pods/b1feb21f-6bc6-4ec4-81ce-8c0f4a4a4f1a/volumes" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.566312 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25d8f60-bc58-4057-ab3c-1f06c24e781b" path="/var/lib/kubelet/pods/e25d8f60-bc58-4057-ab3c-1f06c24e781b/volumes" Feb 18 16:58:06 crc kubenswrapper[4812]: I0218 16:58:06.895465 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 16:58:06 crc kubenswrapper[4812]: W0218 16:58:06.900966 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod089fc991_d92f_4f7d_9869_449514917e01.slice/crio-c138ef8384d67feed29a9cb4b2068a474b16e68cc60878192a262a7915ab467e WatchSource:0}: Error finding container c138ef8384d67feed29a9cb4b2068a474b16e68cc60878192a262a7915ab467e: Status 404 returned error can't find the container with id c138ef8384d67feed29a9cb4b2068a474b16e68cc60878192a262a7915ab467e Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.005127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64753db2-4320-4180-9613-cf76f62101dc","Type":"ContainerStarted","Data":"fb719176d915833cce6e29cffe8d734e6d0a19e9d71bd7319ebb63df9ecaea92"} Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.014834 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3402a7-2751-498c-af17-1895ac40880d","Type":"ContainerStarted","Data":"b6c4418647e16ed880946322787fdf6a5026148bc2df1eac18cb4406a4865f74"} Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.014885 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e3402a7-2751-498c-af17-1895ac40880d","Type":"ContainerStarted","Data":"3327867a7b52513ce368c28c159dec1524cdf2e59607fcdb8b8cac9ed90ef799"} Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.016283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"089fc991-d92f-4f7d-9869-449514917e01","Type":"ContainerStarted","Data":"c138ef8384d67feed29a9cb4b2068a474b16e68cc60878192a262a7915ab467e"} Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.030002 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.029982773 podStartE2EDuration="3.029982773s" podCreationTimestamp="2026-02-18 16:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:58:07.02461522 +0000 UTC m=+1707.290226149" watchObservedRunningTime="2026-02-18 16:58:07.029982773 +0000 UTC m=+1707.295593682" Feb 18 16:58:07 crc kubenswrapper[4812]: I0218 16:58:07.056378 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.056353868 podStartE2EDuration="3.056353868s" podCreationTimestamp="2026-02-18 16:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:58:07.048362509 +0000 UTC m=+1707.313973448" watchObservedRunningTime="2026-02-18 16:58:07.056353868 +0000 UTC 
m=+1707.321964777" Feb 18 16:58:08 crc kubenswrapper[4812]: I0218 16:58:08.027189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"089fc991-d92f-4f7d-9869-449514917e01","Type":"ContainerStarted","Data":"e87ae7f2d8d50a16050050aa27b9f8732b1d469458b466b305ee403e76284637"} Feb 18 16:58:10 crc kubenswrapper[4812]: I0218 16:58:10.381962 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 16:58:10 crc kubenswrapper[4812]: I0218 16:58:10.382639 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 16:58:10 crc kubenswrapper[4812]: I0218 16:58:10.518494 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:58:10 crc kubenswrapper[4812]: E0218 16:58:10.518900 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:58:11 crc kubenswrapper[4812]: I0218 16:58:11.383290 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 16:58:12 crc kubenswrapper[4812]: I0218 16:58:12.160241 4812 scope.go:117] "RemoveContainer" containerID="55922bb314ae33f937d717d091601c14bc8e2381bef35fc93ef58177ea84e5dc" Feb 18 16:58:12 crc kubenswrapper[4812]: I0218 16:58:12.222523 4812 scope.go:117] "RemoveContainer" containerID="b8a4ce55725ec44a6d6c8f11ed5ea3494cb3b26214670b56ad7e21741430d257" Feb 18 16:58:12 crc kubenswrapper[4812]: I0218 16:58:12.269217 4812 scope.go:117] "RemoveContainer" containerID="ad5a9dba1a1c364094bd4818e563c66c2eced3374641c1af9880d2ec022d3586" Feb 18 16:58:12 crc kubenswrapper[4812]: I0218 16:58:12.313037 4812 scope.go:117] "RemoveContainer" containerID="cb441b52241e73310486dc17ee57f049da999ecc2976441274f8cff8ee249d2e" Feb 18 16:58:12 crc kubenswrapper[4812]: I0218 16:58:12.340120 4812 scope.go:117] "RemoveContainer" containerID="bb981a9dca7ffa18247c29b2e1f90811bbf37a43abd423a124484f6d00da0ebc" Feb 18 16:58:14 crc kubenswrapper[4812]: I0218 16:58:14.281662 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 16:58:14 crc kubenswrapper[4812]: I0218 16:58:14.305646 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.305629412 podStartE2EDuration="8.305629412s" podCreationTimestamp="2026-02-18 16:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:58:08.05754226 +0000 UTC m=+1708.323153189" watchObservedRunningTime="2026-02-18 16:58:14.305629412 +0000 UTC m=+1714.571240321" Feb 18 16:58:14 crc kubenswrapper[4812]: I0218 16:58:14.732767 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:58:14 crc kubenswrapper[4812]: I0218 16:58:14.732822 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 16:58:15 crc kubenswrapper[4812]: I0218 16:58:15.381142 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 16:58:15 crc kubenswrapper[4812]: I0218 16:58:15.381185 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 16:58:15 crc kubenswrapper[4812]: I0218 16:58:15.742273 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64753db2-4320-4180-9613-cf76f62101dc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:58:15 crc kubenswrapper[4812]: I0218 16:58:15.742302 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="64753db2-4320-4180-9613-cf76f62101dc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:58:16 crc kubenswrapper[4812]: I0218 16:58:16.383365 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 16:58:16 crc kubenswrapper[4812]: I0218 16:58:16.396413 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e3402a7-2751-498c-af17-1895ac40880d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:58:16 crc kubenswrapper[4812]: I0218 16:58:16.396455 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e3402a7-2751-498c-af17-1895ac40880d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 16:58:16 crc kubenswrapper[4812]: I0218 16:58:16.420774 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 16:58:17 crc kubenswrapper[4812]: I0218 16:58:17.142469 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.739860 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.740574 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.740931 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.740949 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.746699 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 16:58:24 crc kubenswrapper[4812]: I0218 16:58:24.746839 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 16:58:25 crc kubenswrapper[4812]: I0218 16:58:25.386307 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 16:58:25 crc kubenswrapper[4812]: I0218 16:58:25.388035 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 16:58:25 
crc kubenswrapper[4812]: I0218 16:58:25.391604 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 16:58:25 crc kubenswrapper[4812]: I0218 16:58:25.508816 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:58:25 crc kubenswrapper[4812]: E0218 16:58:25.509520 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:58:26 crc kubenswrapper[4812]: I0218 16:58:26.203417 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 16:58:34 crc kubenswrapper[4812]: I0218 16:58:34.106190 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:58:34 crc kubenswrapper[4812]: I0218 16:58:34.998299 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:37 crc kubenswrapper[4812]: I0218 16:58:37.508537 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:58:37 crc kubenswrapper[4812]: E0218 16:58:37.509300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:58:38 crc kubenswrapper[4812]: I0218 16:58:38.396520 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" containerID="cri-o://ef790d1c7f46d4728ca5c66f52883b193cf403a2a870f0384109ddec0867e4af" gracePeriod=604796 Feb 18 16:58:38 crc kubenswrapper[4812]: I0218 16:58:38.957944 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" containerID="cri-o://5d4766015e413344722df453be9e416c24e2f4a4e1aac3c86059f18003f71920" gracePeriod=604797 Feb 18 16:58:39 crc kubenswrapper[4812]: I0218 16:58:39.092510 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 16:58:39 crc kubenswrapper[4812]: I0218 16:58:39.421346 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.379347 4812 generic.go:334] "Generic (PLEG): container finished" podID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" 
containerID="ef790d1c7f46d4728ca5c66f52883b193cf403a2a870f0384109ddec0867e4af" exitCode=0 Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.379494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerDied","Data":"ef790d1c7f46d4728ca5c66f52883b193cf403a2a870f0384109ddec0867e4af"} Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.537014 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648088 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648211 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2xd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648328 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648401 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648463 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648744 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648826 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.648930 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls\") pod \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\" (UID: \"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c\") " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.668419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.686659 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.686975 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.687521 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.694264 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd" (OuterVolumeSpecName: "kube-api-access-dq2xd") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "kube-api-access-dq2xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.694640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.705057 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf" (OuterVolumeSpecName: "server-conf") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.738166 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.738208 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data" (OuterVolumeSpecName: "config-data") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.751997 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752032 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752041 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752065 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752076 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752085 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752093 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2xd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-kube-api-access-dq2xd\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.752125 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 
16:58:45.752134 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.759933 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info" (OuterVolumeSpecName: "pod-info") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.778888 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.853956 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.853990 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.873966 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" (UID: "d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:45 crc kubenswrapper[4812]: I0218 16:58:45.956768 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.429366 4812 generic.go:334] "Generic (PLEG): container finished" podID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerID="5d4766015e413344722df453be9e416c24e2f4a4e1aac3c86059f18003f71920" exitCode=0 Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.429670 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerDied","Data":"5d4766015e413344722df453be9e416c24e2f4a4e1aac3c86059f18003f71920"} Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.468980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c","Type":"ContainerDied","Data":"4268b435751aecab5844e7154db881f9ac9ad303cdb6118aa6ef7d9589659b7f"} Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.469042 4812 scope.go:117] "RemoveContainer" containerID="ef790d1c7f46d4728ca5c66f52883b193cf403a2a870f0384109ddec0867e4af" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.469218 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.594365 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.600877 4812 scope.go:117] "RemoveContainer" containerID="6e0af39e3db5bafcb21325602fdcc8df9f21ec9c7c2302bcb8fa57b4ae50a7df" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.773762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.773823 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.773870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5x4n\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774111 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774204 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774277 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774299 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.774319 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info\") pod \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\" (UID: \"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2\") " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.778786 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.782287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.782429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info" (OuterVolumeSpecName: "pod-info") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.782707 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.784584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.784923 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n" (OuterVolumeSpecName: "kube-api-access-r5x4n") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "kube-api-access-r5x4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.785972 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.786922 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.828619 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data" (OuterVolumeSpecName: "config-data") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.862690 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf" (OuterVolumeSpecName: "server-conf") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876795 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5x4n\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-kube-api-access-r5x4n\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876827 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876838 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876846 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876857 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876866 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876878 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876885 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876912 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.876921 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.899510 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.925660 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" (UID: "ddd0cfa4-b966-4127-a844-ec0c44cb7cd2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.979072 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:46 crc kubenswrapper[4812]: I0218 16:58:46.979127 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.480003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ddd0cfa4-b966-4127-a844-ec0c44cb7cd2","Type":"ContainerDied","Data":"7d2a218ba282068334d1cfd30430311a0e24b9c96c0005dcc46b9d174813bded"} Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.480306 4812 scope.go:117] "RemoveContainer" containerID="5d4766015e413344722df453be9e416c24e2f4a4e1aac3c86059f18003f71920" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.480018 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.514982 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.515069 4812 scope.go:117] "RemoveContainer" containerID="d2251c4c8cea65ebeaea46d31d5c2bea7c46e855105bb6a6016193f4f0a974a5" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.525979 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.548769 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:47 crc kubenswrapper[4812]: E0218 16:58:47.549268 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="setup-container" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549288 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="setup-container" Feb 18 16:58:47 crc kubenswrapper[4812]: E0218 16:58:47.549305 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="setup-container" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549312 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="setup-container" Feb 18 16:58:47 crc kubenswrapper[4812]: E0218 16:58:47.549324 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549332 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: E0218 16:58:47.549342 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549348 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549538 4812 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.549548 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" containerName="rabbitmq" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.550563 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.553832 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.553900 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.553951 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.554153 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.554156 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.554323 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nnbr4" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.555130 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.567167 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692453 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692602 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692670 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6af8f1e1-753d-4010-90a4-8127e39198fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692752 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692869 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692899 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhz2\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-kube-api-access-4bhz2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692965 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.692990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.693039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6af8f1e1-753d-4010-90a4-8127e39198fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795222 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795287 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795312 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795331 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6af8f1e1-753d-4010-90a4-8127e39198fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhz2\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-kube-api-access-4bhz2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795525 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795548 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.795595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6af8f1e1-753d-4010-90a4-8127e39198fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.796178 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.796421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.798240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.797708 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.799011 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.799393 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6af8f1e1-753d-4010-90a4-8127e39198fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.800144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6af8f1e1-753d-4010-90a4-8127e39198fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.800633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.801834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6af8f1e1-753d-4010-90a4-8127e39198fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.803390 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 
16:58:47.812564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhz2\" (UniqueName: \"kubernetes.io/projected/6af8f1e1-753d-4010-90a4-8127e39198fa-kube-api-access-4bhz2\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.841053 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6af8f1e1-753d-4010-90a4-8127e39198fa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:47 crc kubenswrapper[4812]: I0218 16:58:47.939425 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:58:48 crc kubenswrapper[4812]: I0218 16:58:48.221272 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 16:58:48 crc kubenswrapper[4812]: W0218 16:58:48.227915 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af8f1e1_753d_4010_90a4_8127e39198fa.slice/crio-9f78877986ed140c3711388b5f2f983c5a83daaffc1dfe8f7e17e2ddbe29c9aa WatchSource:0}: Error finding container 9f78877986ed140c3711388b5f2f983c5a83daaffc1dfe8f7e17e2ddbe29c9aa: Status 404 returned error can't find the container with id 9f78877986ed140c3711388b5f2f983c5a83daaffc1dfe8f7e17e2ddbe29c9aa Feb 18 16:58:48 crc kubenswrapper[4812]: I0218 16:58:48.491052 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6af8f1e1-753d-4010-90a4-8127e39198fa","Type":"ContainerStarted","Data":"9f78877986ed140c3711388b5f2f983c5a83daaffc1dfe8f7e17e2ddbe29c9aa"} Feb 18 16:58:48 crc kubenswrapper[4812]: I0218 16:58:48.520231 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd0cfa4-b966-4127-a844-ec0c44cb7cd2" path="/var/lib/kubelet/pods/ddd0cfa4-b966-4127-a844-ec0c44cb7cd2/volumes" Feb 18 16:58:50 crc kubenswrapper[4812]: I0218 16:58:50.517881 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:58:50 crc kubenswrapper[4812]: E0218 16:58:50.518823 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:58:50 crc kubenswrapper[4812]: I0218 16:58:50.519811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6af8f1e1-753d-4010-90a4-8127e39198fa","Type":"ContainerStarted","Data":"f20844b3db5e5c3a7ad90a44a6ba5b11b47379d37df156e61bbb2359e0d72817"} Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.788069 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.789801 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.795079 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.803923 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.879871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.879969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.880001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.880052 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.880185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69cmn\" (UniqueName: \"kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.880244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.880296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" 
(UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982508 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69cmn\" (UniqueName: \"kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.982762 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983384 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983743 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983798 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:51 crc kubenswrapper[4812]: I0218 16:58:51.983841 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:52 crc kubenswrapper[4812]: I0218 16:58:52.004985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69cmn\" (UniqueName: \"kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn\") pod \"dnsmasq-dns-79bd4cc8c9-fzslp\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:52 crc kubenswrapper[4812]: I0218 16:58:52.108315 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:52 crc kubenswrapper[4812]: I0218 16:58:52.589075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:58:53 crc kubenswrapper[4812]: I0218 16:58:53.557798 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerID="6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797" exitCode=0 Feb 18 16:58:53 crc kubenswrapper[4812]: I0218 16:58:53.557995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" event={"ID":"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02","Type":"ContainerDied","Data":"6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797"} Feb 18 16:58:53 crc kubenswrapper[4812]: I0218 16:58:53.558146 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" event={"ID":"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02","Type":"ContainerStarted","Data":"9fbe446daccf024bdf227dfc91996fc3037b6e0053917b5871c4aef158569b78"} Feb 18 16:58:54 crc kubenswrapper[4812]: I0218 16:58:54.571619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" event={"ID":"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02","Type":"ContainerStarted","Data":"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8"} Feb 18 16:58:54 crc kubenswrapper[4812]: I0218 16:58:54.572303 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:58:54 crc kubenswrapper[4812]: I0218 16:58:54.596295 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" podStartSLOduration=3.596276192 podStartE2EDuration="3.596276192s" 
podCreationTimestamp="2026-02-18 16:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:58:54.589700079 +0000 UTC m=+1754.855310988" watchObservedRunningTime="2026-02-18 16:58:54.596276192 +0000 UTC m=+1754.861887101" Feb 18 16:59:01 crc kubenswrapper[4812]: I0218 16:59:01.508424 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:59:01 crc kubenswrapper[4812]: E0218 16:59:01.509298 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.109239 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.167827 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.168545 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="dnsmasq-dns" containerID="cri-o://18b6bb67866aa801b00340daaef8b5397548215fe9ccd04bb57d123e7c425eb2" gracePeriod=10 Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.353171 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-lddm7"] Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.355696 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.378022 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-lddm7"] Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.502923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503127 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503159 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503215 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-config\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503293 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.503328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dfx\" (UniqueName: \"kubernetes.io/projected/d87bbc96-67c0-4404-b76a-8613492aec13-kube-api-access-z6dfx\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605366 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dfx\" (UniqueName: \"kubernetes.io/projected/d87bbc96-67c0-4404-b76a-8613492aec13-kube-api-access-z6dfx\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605490 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605576 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.605622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-config\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.606467 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-config\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.607868 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.608699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.610015 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.610108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.610538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d87bbc96-67c0-4404-b76a-8613492aec13-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.637994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dfx\" (UniqueName: \"kubernetes.io/projected/d87bbc96-67c0-4404-b76a-8613492aec13-kube-api-access-z6dfx\") pod \"dnsmasq-dns-6cd9bffc9-lddm7\" (UID: \"d87bbc96-67c0-4404-b76a-8613492aec13\") " pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.659456 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4023942-1810-40f4-90ab-8bb60749c701" containerID="18b6bb67866aa801b00340daaef8b5397548215fe9ccd04bb57d123e7c425eb2" exitCode=0 Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.659545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" event={"ID":"b4023942-1810-40f4-90ab-8bb60749c701","Type":"ContainerDied","Data":"18b6bb67866aa801b00340daaef8b5397548215fe9ccd04bb57d123e7c425eb2"} Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.659579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" event={"ID":"b4023942-1810-40f4-90ab-8bb60749c701","Type":"ContainerDied","Data":"e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144"} Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.659594 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1132a442d133ebb052ae74507770798d6c58d247d80330314461d57139ef144" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.690766 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.734489 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.808831 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.808897 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.808986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bng8l\" (UniqueName: \"kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.809060 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.809245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.809291 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc\") pod \"b4023942-1810-40f4-90ab-8bb60749c701\" (UID: \"b4023942-1810-40f4-90ab-8bb60749c701\") " Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.830484 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l" (OuterVolumeSpecName: "kube-api-access-bng8l") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "kube-api-access-bng8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.867723 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.874964 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.891026 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config" (OuterVolumeSpecName: "config") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.895404 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.895995 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4023942-1810-40f4-90ab-8bb60749c701" (UID: "b4023942-1810-40f4-90ab-8bb60749c701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912294 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912345 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912359 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912372 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bng8l\" (UniqueName: \"kubernetes.io/projected/b4023942-1810-40f4-90ab-8bb60749c701-kube-api-access-bng8l\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912384 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:02 crc kubenswrapper[4812]: I0218 16:59:02.912394 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4023942-1810-40f4-90ab-8bb60749c701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.176074 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-lddm7"] Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.670628 4812 generic.go:334] "Generic (PLEG): container finished" podID="d87bbc96-67c0-4404-b76a-8613492aec13" containerID="33a7ebea97863b49fe771ec66d39f1d33bf5ed4263a38230c4a74e03b12e2622" exitCode=0 Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.670732 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xjs65" Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.670691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" event={"ID":"d87bbc96-67c0-4404-b76a-8613492aec13","Type":"ContainerDied","Data":"33a7ebea97863b49fe771ec66d39f1d33bf5ed4263a38230c4a74e03b12e2622"} Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.671327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" event={"ID":"d87bbc96-67c0-4404-b76a-8613492aec13","Type":"ContainerStarted","Data":"e02cb746be74a06882913de7f62e70bcf555940a04d24c757c1695e4bfe85a8f"} Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.873128 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:59:03 crc kubenswrapper[4812]: I0218 16:59:03.883637 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xjs65"] Feb 18 16:59:04 crc kubenswrapper[4812]: I0218 16:59:04.520688 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4023942-1810-40f4-90ab-8bb60749c701" path="/var/lib/kubelet/pods/b4023942-1810-40f4-90ab-8bb60749c701/volumes" Feb 18 16:59:04 crc kubenswrapper[4812]: I0218 16:59:04.683822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" event={"ID":"d87bbc96-67c0-4404-b76a-8613492aec13","Type":"ContainerStarted","Data":"c13b9112806d45e3e78b735eeca5eba23815df288dec6a973fcbc60203dd3cb0"} Feb 18 16:59:04 crc kubenswrapper[4812]: I0218 16:59:04.685899 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:04 crc kubenswrapper[4812]: I0218 16:59:04.706996 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" podStartSLOduration=2.706977296 podStartE2EDuration="2.706977296s" podCreationTimestamp="2026-02-18 16:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:59:04.704604137 +0000 UTC m=+1764.970215056" watchObservedRunningTime="2026-02-18 16:59:04.706977296 +0000 UTC m=+1764.972588215" Feb 18 16:59:12 crc kubenswrapper[4812]: I0218 16:59:12.692334 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-lddm7" Feb 18 16:59:12 crc kubenswrapper[4812]: I0218 16:59:12.752509 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:59:12 crc kubenswrapper[4812]: I0218 16:59:12.752759 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="dnsmasq-dns" containerID="cri-o://8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8" gracePeriod=10 Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.521318 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712258 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712306 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69cmn\" (UniqueName: \"kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712424 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712545 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.712646 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0\") pod \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\" (UID: \"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02\") " Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.728435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn" (OuterVolumeSpecName: "kube-api-access-69cmn") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "kube-api-access-69cmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.772935 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.787759 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config" (OuterVolumeSpecName: "config") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.789067 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerID="8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8" exitCode=0 Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.789191 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" event={"ID":"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02","Type":"ContainerDied","Data":"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8"} Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.789307 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" event={"ID":"e2d945fb-c61b-4cb5-965a-d2d72a9c0c02","Type":"ContainerDied","Data":"9fbe446daccf024bdf227dfc91996fc3037b6e0053917b5871c4aef158569b78"} Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.789287 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-fzslp" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.789339 4812 scope.go:117] "RemoveContainer" containerID="8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.790734 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.796324 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.799788 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.801935 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" (UID: "e2d945fb-c61b-4cb5-965a-d2d72a9c0c02"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815212 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815257 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815270 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815282 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815295 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-config\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815306 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69cmn\" (UniqueName: \"kubernetes.io/projected/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-kube-api-access-69cmn\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.815321 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.846171 4812 scope.go:117] "RemoveContainer" containerID="6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.869988 4812 scope.go:117] "RemoveContainer" containerID="8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8" Feb 18 16:59:13 crc kubenswrapper[4812]: E0218 16:59:13.870875 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8\": container with ID starting with 8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8 not found: ID does not exist" containerID="8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.870928 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8"} err="failed to get container status \"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8\": rpc error: code = NotFound desc = could not find container \"8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8\": container with ID starting with 8b8dc1984526568f389ec90003d758dacbb0941609802ea5f17275e048605bd8 not found: ID does not exist" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.870961 4812 scope.go:117] "RemoveContainer" containerID="6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797" Feb 18 16:59:13 crc 
kubenswrapper[4812]: E0218 16:59:13.871370 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797\": container with ID starting with 6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797 not found: ID does not exist" containerID="6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797" Feb 18 16:59:13 crc kubenswrapper[4812]: I0218 16:59:13.871417 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797"} err="failed to get container status \"6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797\": rpc error: code = NotFound desc = could not find container \"6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797\": container with ID starting with 6ec1445c139c9c7b9e9a4c4e71e45a142943e5e646c850e68a9005bad9489797 not found: ID does not exist" Feb 18 16:59:14 crc kubenswrapper[4812]: I0218 16:59:14.122806 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:59:14 crc kubenswrapper[4812]: I0218 16:59:14.132361 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-fzslp"] Feb 18 16:59:14 crc kubenswrapper[4812]: I0218 16:59:14.520977 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" path="/var/lib/kubelet/pods/e2d945fb-c61b-4cb5-965a-d2d72a9c0c02/volumes" Feb 18 16:59:15 crc kubenswrapper[4812]: I0218 16:59:15.508402 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:59:15 crc kubenswrapper[4812]: E0218 16:59:15.509726 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.585139 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podd16a9c1f-35a5-4a89-88eb-0c2eadba5c7c"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podd16a9c1f-35a5-4a89-88eb-0c2eadba5c7c] : Timed out while waiting for systemd to remove kubepods-burstable-podd16a9c1f_35a5_4a89_88eb_0c2eadba5c7c.slice" Feb 18 16:59:16 crc kubenswrapper[4812]: E0218 16:59:16.585200 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podd16a9c1f-35a5-4a89-88eb-0c2eadba5c7c] : unable to destroy cgroup paths for cgroup [kubepods burstable podd16a9c1f-35a5-4a89-88eb-0c2eadba5c7c] : Timed out while waiting for systemd to remove kubepods-burstable-podd16a9c1f_35a5_4a89_88eb_0c2eadba5c7c.slice" pod="openstack/rabbitmq-server-0" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.820211 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.870988 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.881948 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.898589 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:59:16 crc kubenswrapper[4812]: E0218 16:59:16.899292 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="init" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899317 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="init" Feb 18 16:59:16 crc kubenswrapper[4812]: E0218 16:59:16.899388 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899398 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: E0218 16:59:16.899411 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899420 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: E0218 16:59:16.899460 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="init" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899468 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="init" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899736 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4023942-1810-40f4-90ab-8bb60749c701" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.899762 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d945fb-c61b-4cb5-965a-d2d72a9c0c02" containerName="dnsmasq-dns" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.901305 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.905309 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.905544 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.905683 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.905875 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mw59b" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.906120 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.906276 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.909719 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 16:59:16 crc kubenswrapper[4812]: I0218 16:59:16.933027 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079084 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079178 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bcd7726-b623-4b86-b8d9-391eea661d2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079245 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bcd7726-b623-4b86-b8d9-391eea661d2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kfv\" (UniqueName: 
\"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-kube-api-access-l7kfv\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079624 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.079905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181449 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181536 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kfv\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-kube-api-access-l7kfv\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181706 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181746 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.181782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.182263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.182373 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.182457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183080 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bcd7726-b623-4b86-b8d9-391eea661d2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183172 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bcd7726-b623-4b86-b8d9-391eea661d2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183532 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.183780 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.184087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.184153 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bcd7726-b623-4b86-b8d9-391eea661d2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.188160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.188594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.197044 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bcd7726-b623-4b86-b8d9-391eea661d2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.197492 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bcd7726-b623-4b86-b8d9-391eea661d2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.202392 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kfv\" (UniqueName: \"kubernetes.io/projected/3bcd7726-b623-4b86-b8d9-391eea661d2f-kube-api-access-l7kfv\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.239709 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3bcd7726-b623-4b86-b8d9-391eea661d2f\") " pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.527031 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 16:59:17 crc kubenswrapper[4812]: I0218 16:59:17.996323 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 16:59:18 crc kubenswrapper[4812]: I0218 16:59:18.528984 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c" path="/var/lib/kubelet/pods/d16a9c1f-35a5-4a89-88eb-0c2eadba5c7c/volumes" Feb 18 16:59:18 crc kubenswrapper[4812]: I0218 16:59:18.843286 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bcd7726-b623-4b86-b8d9-391eea661d2f","Type":"ContainerStarted","Data":"e345d8c6638ba86bb8d3a6116dfc692127fc6c97ce7d60f0c8641e168df0d8d0"} Feb 18 16:59:19 crc kubenswrapper[4812]: I0218 16:59:19.854445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bcd7726-b623-4b86-b8d9-391eea661d2f","Type":"ContainerStarted","Data":"b73a55c00d0428b7e79fa1e10e55ba7a2ba0c8353078f99a6cde7bfb74527c78"} Feb 18 16:59:22 crc kubenswrapper[4812]: I0218 16:59:22.900979 4812 generic.go:334] "Generic (PLEG): container finished" podID="6af8f1e1-753d-4010-90a4-8127e39198fa" containerID="f20844b3db5e5c3a7ad90a44a6ba5b11b47379d37df156e61bbb2359e0d72817" exitCode=0 Feb 18 16:59:22 crc kubenswrapper[4812]: I0218 16:59:22.901056 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6af8f1e1-753d-4010-90a4-8127e39198fa","Type":"ContainerDied","Data":"f20844b3db5e5c3a7ad90a44a6ba5b11b47379d37df156e61bbb2359e0d72817"} Feb 18 16:59:23 crc kubenswrapper[4812]: I0218 16:59:23.914257 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6af8f1e1-753d-4010-90a4-8127e39198fa","Type":"ContainerStarted","Data":"a51ad32cbec549a8c2096a8dd659648a3fc2517851d54595d45a0207d33d1ecf"} Feb 18 16:59:23 crc kubenswrapper[4812]: I0218 16:59:23.914771 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:59:23 crc kubenswrapper[4812]: I0218 16:59:23.948748 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.948724326 podStartE2EDuration="36.948724326s" podCreationTimestamp="2026-02-18 16:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:59:23.943430154 +0000 UTC m=+1784.209041073" watchObservedRunningTime="2026-02-18 16:59:23.948724326 +0000 UTC m=+1784.214335235" Feb 18 16:59:26 crc kubenswrapper[4812]: I0218 16:59:26.044400 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wvhjk"] Feb 18 16:59:26 crc kubenswrapper[4812]: I0218 16:59:26.055910 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wvhjk"] Feb 18 16:59:26 crc kubenswrapper[4812]: I0218 16:59:26.524042 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42be7b4-a0bc-40c3-b297-1259ce32e320" path="/var/lib/kubelet/pods/d42be7b4-a0bc-40c3-b297-1259ce32e320/volumes" Feb 18 16:59:29 crc kubenswrapper[4812]: I0218 16:59:29.508343 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:59:29 crc kubenswrapper[4812]: E0218 16:59:29.509218 4812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.357941 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596"] Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.359504 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.361824 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.363656 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.365332 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.369135 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.378518 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596"] Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.419432 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.419527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.419794 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226kk\" (UniqueName: \"kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.420029 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 
18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.521835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.521992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.522048 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.522152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226kk\" (UniqueName: \"kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.528589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.535471 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.537987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.542908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226kk\" (UniqueName: \"kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vf596\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:31 crc kubenswrapper[4812]: I0218 16:59:31.684323 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:32 crc kubenswrapper[4812]: I0218 16:59:32.243613 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596"] Feb 18 16:59:32 crc kubenswrapper[4812]: W0218 16:59:32.250118 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb914cca_2704_4009_aa44_dfe3d6c00290.slice/crio-96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483 WatchSource:0}: Error finding container 96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483: Status 404 returned error can't find the container with id 96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483 Feb 18 16:59:33 crc kubenswrapper[4812]: I0218 16:59:33.000205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" event={"ID":"fb914cca-2704-4009-aa44-dfe3d6c00290","Type":"ContainerStarted","Data":"96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483"} Feb 18 16:59:37 crc kubenswrapper[4812]: I0218 16:59:37.942297 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 16:59:40 crc kubenswrapper[4812]: I0218 16:59:40.049597 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nb6kk"] Feb 18 16:59:40 crc kubenswrapper[4812]: I0218 16:59:40.060939 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nb6kk"] Feb 18 16:59:40 crc kubenswrapper[4812]: I0218 16:59:40.515694 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:59:40 crc kubenswrapper[4812]: E0218 16:59:40.515951 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:59:40 crc kubenswrapper[4812]: I0218 16:59:40.548193 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1406358-077f-4147-9645-a0492308800c" path="/var/lib/kubelet/pods/d1406358-077f-4147-9645-a0492308800c/volumes" Feb 18 16:59:44 crc kubenswrapper[4812]: I0218 16:59:44.026614 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8fm48"] Feb 18 16:59:44 crc kubenswrapper[4812]: I0218 16:59:44.037352 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8fm48"] Feb 18 16:59:44 crc kubenswrapper[4812]: I0218 16:59:44.630259 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350d667f-d6e0-4c3f-b5c0-91c11a0aafcb" path="/var/lib/kubelet/pods/350d667f-d6e0-4c3f-b5c0-91c11a0aafcb/volumes" Feb 18 16:59:46 crc kubenswrapper[4812]: I0218 16:59:46.802002 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 16:59:47 crc kubenswrapper[4812]: 
I0218 16:59:47.034977 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-cwvf6"] Feb 18 16:59:47 crc kubenswrapper[4812]: I0218 16:59:47.045905 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-cwvf6"] Feb 18 16:59:47 crc kubenswrapper[4812]: I0218 16:59:47.141772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" event={"ID":"fb914cca-2704-4009-aa44-dfe3d6c00290","Type":"ContainerStarted","Data":"e37f4ab1fe79cbd6e2e6625857ae5d9f4a000a9907fe04b6878520389523c565"} Feb 18 16:59:47 crc kubenswrapper[4812]: I0218 16:59:47.166320 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" podStartSLOduration=1.620394422 podStartE2EDuration="16.166295786s" podCreationTimestamp="2026-02-18 16:59:31 +0000 UTC" firstStartedPulling="2026-02-18 16:59:32.252519683 +0000 UTC m=+1792.518130592" lastFinishedPulling="2026-02-18 16:59:46.798421047 +0000 UTC m=+1807.064031956" observedRunningTime="2026-02-18 16:59:47.165989478 +0000 UTC m=+1807.431600387" watchObservedRunningTime="2026-02-18 16:59:47.166295786 +0000 UTC m=+1807.431906695" Feb 18 16:59:48 crc kubenswrapper[4812]: I0218 16:59:48.549172 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95955afd-adc9-44b0-93ba-4e4a63292613" path="/var/lib/kubelet/pods/95955afd-adc9-44b0-93ba-4e4a63292613/volumes" Feb 18 16:59:52 crc kubenswrapper[4812]: I0218 16:59:52.186257 4812 generic.go:334] "Generic (PLEG): container finished" podID="3bcd7726-b623-4b86-b8d9-391eea661d2f" containerID="b73a55c00d0428b7e79fa1e10e55ba7a2ba0c8353078f99a6cde7bfb74527c78" exitCode=0 Feb 18 16:59:52 crc kubenswrapper[4812]: I0218 16:59:52.186353 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bcd7726-b623-4b86-b8d9-391eea661d2f","Type":"ContainerDied","Data":"b73a55c00d0428b7e79fa1e10e55ba7a2ba0c8353078f99a6cde7bfb74527c78"} Feb 18 16:59:53 crc kubenswrapper[4812]: I0218 16:59:53.202740 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bcd7726-b623-4b86-b8d9-391eea661d2f","Type":"ContainerStarted","Data":"8989aa4b0453778578c52d189cc88f978f41cefa905573db66d574d7dd56099d"} Feb 18 16:59:53 crc kubenswrapper[4812]: I0218 16:59:53.203300 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 16:59:54 crc kubenswrapper[4812]: I0218 16:59:54.020519 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.020502066 podStartE2EDuration="38.020502066s" podCreationTimestamp="2026-02-18 16:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 16:59:53.227415255 +0000 UTC m=+1813.493026154" watchObservedRunningTime="2026-02-18 16:59:54.020502066 +0000 UTC m=+1814.286112965" Feb 18 16:59:54 crc kubenswrapper[4812]: I0218 16:59:54.029457 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-70ea-account-create-update-z8dpj"] Feb 18 16:59:54 crc kubenswrapper[4812]: I0218 16:59:54.039135 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-70ea-account-create-update-z8dpj"] Feb 18 16:59:54 crc kubenswrapper[4812]: I0218 16:59:54.519284 4812 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="bced8af6-aca7-4de1-96a8-40c4c31a8168" path="/var/lib/kubelet/pods/bced8af6-aca7-4de1-96a8-40c4c31a8168/volumes" Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.035379 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-addf-account-create-update-ltxgg"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.044615 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a8b1-account-create-update-785ct"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.058805 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-52dc-account-create-update-s4cjj"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.068033 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-52dc-account-create-update-s4cjj"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.078161 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-addf-account-create-update-ltxgg"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.088233 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a8b1-account-create-update-785ct"] Feb 18 16:59:55 crc kubenswrapper[4812]: I0218 16:59:55.509194 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 16:59:55 crc kubenswrapper[4812]: E0218 16:59:55.509621 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 16:59:56 crc kubenswrapper[4812]: I0218 16:59:56.519360 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93bd63e5-c276-43f0-8650-bf74a32c7e7f" path="/var/lib/kubelet/pods/93bd63e5-c276-43f0-8650-bf74a32c7e7f/volumes" Feb 18 16:59:56 crc kubenswrapper[4812]: I0218 16:59:56.521243 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95504d5b-50f1-436c-a2fe-21835f70912e" path="/var/lib/kubelet/pods/95504d5b-50f1-436c-a2fe-21835f70912e/volumes" Feb 18 16:59:56 crc kubenswrapper[4812]: I0218 16:59:56.527811 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdda65b0-3132-4e95-a32e-d5772f7f1354" path="/var/lib/kubelet/pods/bdda65b0-3132-4e95-a32e-d5772f7f1354/volumes" Feb 18 16:59:58 crc kubenswrapper[4812]: I0218 16:59:58.261354 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb914cca-2704-4009-aa44-dfe3d6c00290" containerID="e37f4ab1fe79cbd6e2e6625857ae5d9f4a000a9907fe04b6878520389523c565" exitCode=0 Feb 18 16:59:58 crc kubenswrapper[4812]: I0218 16:59:58.261445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" event={"ID":"fb914cca-2704-4009-aa44-dfe3d6c00290","Type":"ContainerDied","Data":"e37f4ab1fe79cbd6e2e6625857ae5d9f4a000a9907fe04b6878520389523c565"} Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.788153 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.912446 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle\") pod \"fb914cca-2704-4009-aa44-dfe3d6c00290\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.912492 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-226kk\" (UniqueName: \"kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk\") pod \"fb914cca-2704-4009-aa44-dfe3d6c00290\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.912565 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam\") pod \"fb914cca-2704-4009-aa44-dfe3d6c00290\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.912642 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory\") pod \"fb914cca-2704-4009-aa44-dfe3d6c00290\" (UID: \"fb914cca-2704-4009-aa44-dfe3d6c00290\") " Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.918642 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fb914cca-2704-4009-aa44-dfe3d6c00290" (UID: "fb914cca-2704-4009-aa44-dfe3d6c00290"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.919792 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk" (OuterVolumeSpecName: "kube-api-access-226kk") pod "fb914cca-2704-4009-aa44-dfe3d6c00290" (UID: "fb914cca-2704-4009-aa44-dfe3d6c00290"). InnerVolumeSpecName "kube-api-access-226kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.940688 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory" (OuterVolumeSpecName: "inventory") pod "fb914cca-2704-4009-aa44-dfe3d6c00290" (UID: "fb914cca-2704-4009-aa44-dfe3d6c00290"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 16:59:59 crc kubenswrapper[4812]: I0218 16:59:59.947129 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fb914cca-2704-4009-aa44-dfe3d6c00290" (UID: "fb914cca-2704-4009-aa44-dfe3d6c00290"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.015496 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.015535 4812 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.015547 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-226kk\" (UniqueName: \"kubernetes.io/projected/fb914cca-2704-4009-aa44-dfe3d6c00290-kube-api-access-226kk\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.015561 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb914cca-2704-4009-aa44-dfe3d6c00290-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.174699 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd"] Feb 18 17:00:00 crc kubenswrapper[4812]: E0218 17:00:00.175160 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb914cca-2704-4009-aa44-dfe3d6c00290" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.175180 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb914cca-2704-4009-aa44-dfe3d6c00290" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.175392 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb914cca-2704-4009-aa44-dfe3d6c00290" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.176202 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.178319 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.188178 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd"] Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.203458 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.284007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" event={"ID":"fb914cca-2704-4009-aa44-dfe3d6c00290","Type":"ContainerDied","Data":"96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483"} Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.284347 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fc05ff02818987adf4cb6c4ccec7fc4f1abc80cc78902320c757cf66cc7483" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.284144 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vf596" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.327473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9cg\" (UniqueName: \"kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.327922 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.328250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.357739 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh"] Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.359536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.361970 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.362419 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.362743 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.362935 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.372193 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh"] Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbpl\" (UniqueName: \"kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.432997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr9cg\" (UniqueName: \"kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.433246 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.441359 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.460817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr9cg\" (UniqueName: \"kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg\") pod \"collect-profiles-29523900-sgdfd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.521339 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.536300 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.536438 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbpl\" (UniqueName: \"kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.536473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.542729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.547004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.559140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbpl\" (UniqueName: \"kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b84kh\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:00 crc kubenswrapper[4812]: I0218 17:00:00.679532 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:01 crc kubenswrapper[4812]: I0218 17:00:01.017552 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd"] Feb 18 17:00:01 crc kubenswrapper[4812]: I0218 17:00:01.295168 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh"] Feb 18 17:00:01 crc kubenswrapper[4812]: I0218 17:00:01.297718 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" event={"ID":"d7ae5c37-5264-4f63-94ef-49c90413afdd","Type":"ContainerStarted","Data":"13df47ebc6312a519403e814021eae0bbfa162ef1e81023306bdb03e38e6b71c"} Feb 18 17:00:01 crc kubenswrapper[4812]: W0218 17:00:01.298036 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2aa7b7_90dd_406e_a503_b12166293cff.slice/crio-11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b WatchSource:0}: Error finding container 11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b: Status 404 returned error can't find the container with id 11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b Feb 18 17:00:02 crc kubenswrapper[4812]: I0218 17:00:02.321809 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" event={"ID":"ec2aa7b7-90dd-406e-a503-b12166293cff","Type":"ContainerStarted","Data":"221bf48906b1d1849245a1ff79598181d567d4fea954244939306c65c3c915bb"} Feb 18 17:00:02 crc kubenswrapper[4812]: I0218 17:00:02.322615 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" event={"ID":"ec2aa7b7-90dd-406e-a503-b12166293cff","Type":"ContainerStarted","Data":"11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b"} Feb 18 17:00:02 crc kubenswrapper[4812]: I0218 17:00:02.324937 4812 generic.go:334] "Generic (PLEG): container finished" podID="d7ae5c37-5264-4f63-94ef-49c90413afdd" containerID="44fe34686de6fd59e43deb5a8cd5bca0eda6efffc7f4e5753a6bc704c5772ac9" exitCode=0 Feb 18 17:00:02 crc kubenswrapper[4812]: I0218 17:00:02.324989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" event={"ID":"d7ae5c37-5264-4f63-94ef-49c90413afdd","Type":"ContainerDied","Data":"44fe34686de6fd59e43deb5a8cd5bca0eda6efffc7f4e5753a6bc704c5772ac9"} Feb 18 17:00:02 crc kubenswrapper[4812]: I0218 17:00:02.352029 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" podStartSLOduration=1.8090732360000001 podStartE2EDuration="2.352006543s" podCreationTimestamp="2026-02-18 17:00:00 +0000 UTC" firstStartedPulling="2026-02-18 17:00:01.300348509 +0000 UTC m=+1821.565959418" lastFinishedPulling="2026-02-18 17:00:01.843281816 +0000 UTC m=+1822.108892725" observedRunningTime="2026-02-18 17:00:02.342371373 +0000 UTC m=+1822.607982292" watchObservedRunningTime="2026-02-18 17:00:02.352006543 +0000 UTC m=+1822.617617452" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.708121 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.821675 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume\") pod \"d7ae5c37-5264-4f63-94ef-49c90413afdd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.821882 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume\") pod \"d7ae5c37-5264-4f63-94ef-49c90413afdd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.822004 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr9cg\" (UniqueName: \"kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg\") pod \"d7ae5c37-5264-4f63-94ef-49c90413afdd\" (UID: \"d7ae5c37-5264-4f63-94ef-49c90413afdd\") " Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.822585 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7ae5c37-5264-4f63-94ef-49c90413afdd" (UID: "d7ae5c37-5264-4f63-94ef-49c90413afdd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.828340 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7ae5c37-5264-4f63-94ef-49c90413afdd" (UID: "d7ae5c37-5264-4f63-94ef-49c90413afdd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.828373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg" (OuterVolumeSpecName: "kube-api-access-pr9cg") pod "d7ae5c37-5264-4f63-94ef-49c90413afdd" (UID: "d7ae5c37-5264-4f63-94ef-49c90413afdd"). InnerVolumeSpecName "kube-api-access-pr9cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.924064 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7ae5c37-5264-4f63-94ef-49c90413afdd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.924095 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7ae5c37-5264-4f63-94ef-49c90413afdd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:03 crc kubenswrapper[4812]: I0218 17:00:03.924118 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr9cg\" (UniqueName: \"kubernetes.io/projected/d7ae5c37-5264-4f63-94ef-49c90413afdd-kube-api-access-pr9cg\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:04 crc kubenswrapper[4812]: I0218 17:00:04.352564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" event={"ID":"d7ae5c37-5264-4f63-94ef-49c90413afdd","Type":"ContainerDied","Data":"13df47ebc6312a519403e814021eae0bbfa162ef1e81023306bdb03e38e6b71c"} Feb 18 17:00:04 crc kubenswrapper[4812]: I0218 17:00:04.352619 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13df47ebc6312a519403e814021eae0bbfa162ef1e81023306bdb03e38e6b71c" Feb 18 17:00:04 crc kubenswrapper[4812]: I0218 17:00:04.352700 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd" Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.039129 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vpbqr"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.052670 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t7vq4"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.065296 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v9vtd"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.077857 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t7vq4"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.087818 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v9vtd"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.097056 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vpbqr"] Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.362887 4812 generic.go:334] "Generic (PLEG): container finished" podID="ec2aa7b7-90dd-406e-a503-b12166293cff" containerID="221bf48906b1d1849245a1ff79598181d567d4fea954244939306c65c3c915bb" exitCode=0 Feb 18 17:00:05 crc kubenswrapper[4812]: I0218 17:00:05.362939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" event={"ID":"ec2aa7b7-90dd-406e-a503-b12166293cff","Type":"ContainerDied","Data":"221bf48906b1d1849245a1ff79598181d567d4fea954244939306c65c3c915bb"} Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.028817 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4f96-account-create-update-7qlk9"] Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.039453 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4f96-account-create-update-7qlk9"] Feb 18 17:00:06 
crc kubenswrapper[4812]: I0218 17:00:06.527271 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52149a39-a534-41cf-aa43-7965aa140ad3" path="/var/lib/kubelet/pods/52149a39-a534-41cf-aa43-7965aa140ad3/volumes" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.530540 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a471a58-4811-48ef-81c1-e1505df74e97" path="/var/lib/kubelet/pods/6a471a58-4811-48ef-81c1-e1505df74e97/volumes" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.531387 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89791338-9ae7-471e-aa30-bff3c2438a5c" path="/var/lib/kubelet/pods/89791338-9ae7-471e-aa30-bff3c2438a5c/volumes" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.532062 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1d4f41-dea7-4312-886c-b8c731ed5094" path="/var/lib/kubelet/pods/9a1d4f41-dea7-4312-886c-b8c731ed5094/volumes" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.900011 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.910645 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam\") pod \"ec2aa7b7-90dd-406e-a503-b12166293cff\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.911013 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbpl\" (UniqueName: \"kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl\") pod \"ec2aa7b7-90dd-406e-a503-b12166293cff\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.911225 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory\") pod \"ec2aa7b7-90dd-406e-a503-b12166293cff\" (UID: \"ec2aa7b7-90dd-406e-a503-b12166293cff\") " Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.921326 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl" (OuterVolumeSpecName: "kube-api-access-vsbpl") pod "ec2aa7b7-90dd-406e-a503-b12166293cff" (UID: "ec2aa7b7-90dd-406e-a503-b12166293cff"). InnerVolumeSpecName "kube-api-access-vsbpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.949469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory" (OuterVolumeSpecName: "inventory") pod "ec2aa7b7-90dd-406e-a503-b12166293cff" (UID: "ec2aa7b7-90dd-406e-a503-b12166293cff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:00:06 crc kubenswrapper[4812]: I0218 17:00:06.953490 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec2aa7b7-90dd-406e-a503-b12166293cff" (UID: "ec2aa7b7-90dd-406e-a503-b12166293cff"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.012905 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbpl\" (UniqueName: \"kubernetes.io/projected/ec2aa7b7-90dd-406e-a503-b12166293cff-kube-api-access-vsbpl\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.012982 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.012999 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec2aa7b7-90dd-406e-a503-b12166293cff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.408224 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" event={"ID":"ec2aa7b7-90dd-406e-a503-b12166293cff","Type":"ContainerDied","Data":"11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b"} Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.408272 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c6c6eccd7eb9e5d6d395151165a2d16f0a54c4b532ae8c64f16811ec0a059b" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.408275 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b84kh" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.478417 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n"] Feb 18 17:00:07 crc kubenswrapper[4812]: E0218 17:00:07.479015 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2aa7b7-90dd-406e-a503-b12166293cff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.479044 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2aa7b7-90dd-406e-a503-b12166293cff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:07 crc kubenswrapper[4812]: E0218 17:00:07.479097 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ae5c37-5264-4f63-94ef-49c90413afdd" containerName="collect-profiles" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.479128 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ae5c37-5264-4f63-94ef-49c90413afdd" containerName="collect-profiles" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.479361 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ae5c37-5264-4f63-94ef-49c90413afdd" containerName="collect-profiles" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.479390 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2aa7b7-90dd-406e-a503-b12166293cff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.480249 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.482461 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.482832 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.483028 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.483599 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.511868 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n"] Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.522339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlh8b\" (UniqueName: \"kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.522503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.522565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.522879 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.531282 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.624949 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlh8b\" (UniqueName: \"kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.625043 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.625119 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.625213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.632454 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.639642 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.640139 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.662042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlh8b\" (UniqueName: \"kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:07 crc kubenswrapper[4812]: I0218 17:00:07.799393 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:00:08 crc kubenswrapper[4812]: I0218 17:00:08.391032 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n"] Feb 18 17:00:08 crc kubenswrapper[4812]: I0218 17:00:08.418128 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" event={"ID":"ed69aece-4a9c-4e29-a245-b31c021bbca6","Type":"ContainerStarted","Data":"d073e71cf91938519d3d8103be62acafd56b1b266dfa7cc579327ebc1fa28079"} Feb 18 17:00:08 crc kubenswrapper[4812]: I0218 17:00:08.509288 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 17:00:08 crc kubenswrapper[4812]: E0218 17:00:08.509575 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:00:09 crc kubenswrapper[4812]: I0218 17:00:09.428316 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" event={"ID":"ed69aece-4a9c-4e29-a245-b31c021bbca6","Type":"ContainerStarted","Data":"5aba22c3dc0d0681a7458f667376abbeebe25854f6160892f40c25e43fec271c"} Feb 18 17:00:12 crc kubenswrapper[4812]: I0218 17:00:12.890628 4812 scope.go:117] "RemoveContainer" containerID="6305bb6714308dd574764743da30b170bc4f215cb7484c891ea63f70d587c7c2" Feb 18 17:00:12 crc kubenswrapper[4812]: I0218 17:00:12.918629 4812 scope.go:117] "RemoveContainer" containerID="4b02ebf347fa9cdb83c16444b2c3f6104392031087d2ed9c5e46c43896c6e1eb" Feb 18 17:00:12 crc kubenswrapper[4812]: I0218 17:00:12.968335 4812 scope.go:117] "RemoveContainer" containerID="7e1a476320d946d03983a48bdfcb6820db6d788dd7ca4b12d685550a08a3339e" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.013975 4812 scope.go:117] "RemoveContainer" containerID="3107070e262ee24fba9777fd7c59e2932babf6fb5ff435a936fc7fb668d8889b" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.064145 4812 scope.go:117] "RemoveContainer" containerID="37da6ef8c87cca73941fecd66b0ed1baaf0ad94244ea8a393850f3ae4e94ecde" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.148552 4812 scope.go:117] "RemoveContainer" containerID="0c79e9ea8de5d888fc7df23060fca302a03ec1ecf4c2e8595fa78b2627794f83" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.219804 4812 scope.go:117] "RemoveContainer" containerID="76f5f5cf7144824215bb8ea652fef920830a808e9561f8ea3d472c5a7e3e2b06" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.255602 4812 scope.go:117] "RemoveContainer" containerID="a4abc955e3bc6a488c15d682ce968563418cef9d6d03ed09ed36aa8add2a256b" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.292782 4812 scope.go:117] "RemoveContainer" containerID="f392d4ed8aacf89d1d3efe6b34c16a0fc868cfb2b0f2281388d13c544c4cfcbd" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.321600 4812 scope.go:117] "RemoveContainer" containerID="3b901bed95ae4e36fb7cc344c7019186accebc442efe917f5165d5252b2e2f32" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.345018 4812 scope.go:117] "RemoveContainer" 
containerID="b821240bcd8ce78f3f661ba0304c3874a67ffe6291a5b8d2a13de148be49a7a6" Feb 18 17:00:13 crc kubenswrapper[4812]: I0218 17:00:13.383784 4812 scope.go:117] "RemoveContainer" containerID="41b151232d544a4490fab42c933f3b203a98fa585d05bf18d3c177a3cc201723" Feb 18 17:00:19 crc kubenswrapper[4812]: I0218 17:00:19.038044 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" podStartSLOduration=11.485800631 podStartE2EDuration="12.038021448s" podCreationTimestamp="2026-02-18 17:00:07 +0000 UTC" firstStartedPulling="2026-02-18 17:00:08.398228222 +0000 UTC m=+1828.663839131" lastFinishedPulling="2026-02-18 17:00:08.950449029 +0000 UTC m=+1829.216059948" observedRunningTime="2026-02-18 17:00:09.452796958 +0000 UTC m=+1829.718407887" watchObservedRunningTime="2026-02-18 17:00:19.038021448 +0000 UTC m=+1839.303632357" Feb 18 17:00:19 crc kubenswrapper[4812]: I0218 17:00:19.046937 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c59-account-create-update-qjx5n"] Feb 18 17:00:19 crc kubenswrapper[4812]: I0218 17:00:19.057414 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c59-account-create-update-qjx5n"] Feb 18 17:00:19 crc kubenswrapper[4812]: I0218 17:00:19.508944 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 17:00:19 crc kubenswrapper[4812]: E0218 17:00:19.509497 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:00:20 crc kubenswrapper[4812]: I0218 17:00:20.519976 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23517ff2-7d34-4754-9b30-f4948ae6b681" path="/var/lib/kubelet/pods/23517ff2-7d34-4754-9b30-f4948ae6b681/volumes" Feb 18 17:00:33 crc kubenswrapper[4812]: I0218 17:00:33.508077 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 17:00:34 crc kubenswrapper[4812]: I0218 17:00:34.857192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b"} Feb 18 17:00:45 crc kubenswrapper[4812]: I0218 17:00:45.054511 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5934-account-create-update-kslhk"] Feb 18 17:00:45 crc kubenswrapper[4812]: I0218 17:00:45.064179 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5934-account-create-update-kslhk"] Feb 18 17:00:46 crc kubenswrapper[4812]: I0218 17:00:46.519027 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5618c13-9e4f-409f-bf33-ce07c822b609" path="/var/lib/kubelet/pods/c5618c13-9e4f-409f-bf33-ce07c822b609/volumes" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.173873 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523901-8rkjl"] Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.176603 4812 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.195864 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523901-8rkjl"] Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.285954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.286001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.286053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bk9t\" (UniqueName: \"kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.286075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.387456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bk9t\" (UniqueName: \"kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.387507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.387678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.387704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.396469 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.397962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.398313 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.407145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bk9t\" (UniqueName: \"kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t\") pod \"keystone-cron-29523901-8rkjl\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:00 crc kubenswrapper[4812]: I0218 17:01:00.503298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:01 crc kubenswrapper[4812]: I0218 17:01:01.023897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523901-8rkjl"] Feb 18 17:01:01 crc kubenswrapper[4812]: I0218 17:01:01.165278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523901-8rkjl" event={"ID":"8e55d885-ab77-4a7f-a3ea-085212e6fb6c","Type":"ContainerStarted","Data":"c84b67ad7ffdb314fb8c8ab4655d8a8311aacaa934f0ac21a74e93f28c32cf64"} Feb 18 17:01:02 crc kubenswrapper[4812]: I0218 17:01:02.175866 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523901-8rkjl" event={"ID":"8e55d885-ab77-4a7f-a3ea-085212e6fb6c","Type":"ContainerStarted","Data":"16c0ef8bd8525076202d3351a9c29e050a453d8b37a0860ac782bdc52839dfe1"} Feb 18 17:01:02 crc kubenswrapper[4812]: I0218 17:01:02.203517 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523901-8rkjl" podStartSLOduration=2.203494647 podStartE2EDuration="2.203494647s" podCreationTimestamp="2026-02-18 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 17:01:02.194633817 +0000 UTC m=+1882.460244726" watchObservedRunningTime="2026-02-18 17:01:02.203494647 +0000 UTC m=+1882.469105556" Feb 18 17:01:05 crc kubenswrapper[4812]: I0218 17:01:05.207042 4812 generic.go:334] "Generic (PLEG): container finished" podID="8e55d885-ab77-4a7f-a3ea-085212e6fb6c" containerID="16c0ef8bd8525076202d3351a9c29e050a453d8b37a0860ac782bdc52839dfe1" exitCode=0 Feb 18 17:01:05 crc kubenswrapper[4812]: I0218 17:01:05.207214 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523901-8rkjl" event={"ID":"8e55d885-ab77-4a7f-a3ea-085212e6fb6c","Type":"ContainerDied","Data":"16c0ef8bd8525076202d3351a9c29e050a453d8b37a0860ac782bdc52839dfe1"} Feb 18 
17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.682557 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.818399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data\") pod \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.818553 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bk9t\" (UniqueName: \"kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t\") pod \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.818593 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys\") pod \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.818646 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle\") pod \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\" (UID: \"8e55d885-ab77-4a7f-a3ea-085212e6fb6c\") " Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.824156 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t" (OuterVolumeSpecName: "kube-api-access-2bk9t") pod "8e55d885-ab77-4a7f-a3ea-085212e6fb6c" (UID: "8e55d885-ab77-4a7f-a3ea-085212e6fb6c"). InnerVolumeSpecName "kube-api-access-2bk9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.824677 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8e55d885-ab77-4a7f-a3ea-085212e6fb6c" (UID: "8e55d885-ab77-4a7f-a3ea-085212e6fb6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.852700 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e55d885-ab77-4a7f-a3ea-085212e6fb6c" (UID: "8e55d885-ab77-4a7f-a3ea-085212e6fb6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.885115 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data" (OuterVolumeSpecName: "config-data") pod "8e55d885-ab77-4a7f-a3ea-085212e6fb6c" (UID: "8e55d885-ab77-4a7f-a3ea-085212e6fb6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.921425 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.921464 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bk9t\" (UniqueName: \"kubernetes.io/projected/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-kube-api-access-2bk9t\") on node \"crc\" DevicePath \"\"" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.921474 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 17:01:06 crc kubenswrapper[4812]: I0218 17:01:06.921482 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e55d885-ab77-4a7f-a3ea-085212e6fb6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:01:07 crc kubenswrapper[4812]: I0218 17:01:07.229124 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523901-8rkjl" event={"ID":"8e55d885-ab77-4a7f-a3ea-085212e6fb6c","Type":"ContainerDied","Data":"c84b67ad7ffdb314fb8c8ab4655d8a8311aacaa934f0ac21a74e93f28c32cf64"} Feb 18 17:01:07 crc kubenswrapper[4812]: I0218 17:01:07.229166 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523901-8rkjl" Feb 18 17:01:07 crc kubenswrapper[4812]: I0218 17:01:07.229171 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c84b67ad7ffdb314fb8c8ab4655d8a8311aacaa934f0ac21a74e93f28c32cf64" Feb 18 17:01:13 crc kubenswrapper[4812]: I0218 17:01:13.824471 4812 scope.go:117] "RemoveContainer" containerID="b7d9e889799f584b66beb601d181d89642c40a247ba3583e0ea8d44f42320393" Feb 18 17:01:13 crc kubenswrapper[4812]: I0218 17:01:13.854176 4812 scope.go:117] "RemoveContainer" containerID="a1feac6fea12bef17ae52bd3e11310bf26dc4cf7afa154b7dd64e02cabc50769" Feb 18 17:01:58 crc kubenswrapper[4812]: I0218 17:01:58.048461 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-jvwjp"] Feb 18 17:01:58 crc kubenswrapper[4812]: I0218 17:01:58.059240 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-jvwjp"] Feb 18 17:01:58 crc kubenswrapper[4812]: I0218 17:01:58.520319 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01adb9d2-b3f9-453d-b8a9-d5811235140c" path="/var/lib/kubelet/pods/01adb9d2-b3f9-453d-b8a9-d5811235140c/volumes" Feb 18 17:02:01 crc kubenswrapper[4812]: I0218 17:02:01.034955 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rwlqt"] Feb 18 17:02:01 crc kubenswrapper[4812]: I0218 17:02:01.047404 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rwlqt"] Feb 18 17:02:02 crc kubenswrapper[4812]: I0218 17:02:02.533960 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f519d561-ebbc-4aff-8d2e-6b98630f5e5f" path="/var/lib/kubelet/pods/f519d561-ebbc-4aff-8d2e-6b98630f5e5f/volumes" Feb 18 17:02:13 crc kubenswrapper[4812]: I0218 17:02:13.963631 4812 scope.go:117] "RemoveContainer" containerID="f44243636fa1bd97833760b861191926e381be5f94d007dbc5c0f356dd69195b" Feb 18 17:02:13 crc kubenswrapper[4812]: 
I0218 17:02:13.993613 4812 scope.go:117] "RemoveContainer" containerID="c2d1ec85aca310e71011839bc3d4e94b3cfe300eebfac4c1f2dd1dbc5f4eabe5" Feb 18 17:02:14 crc kubenswrapper[4812]: I0218 17:02:14.026933 4812 scope.go:117] "RemoveContainer" containerID="9c540727aadab60c26fe77c651ddf1e9610afb24f06c7ddf6ce2ccb5ea15f6c0" Feb 18 17:02:14 crc kubenswrapper[4812]: I0218 17:02:14.084454 4812 scope.go:117] "RemoveContainer" containerID="be3993d206fdaa57d763ab9baae8d52dd49332adf8f31b145e77f3a9803670d6" Feb 18 17:02:33 crc kubenswrapper[4812]: I0218 17:02:33.457247 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:02:33 crc kubenswrapper[4812]: I0218 17:02:33.457889 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:03:03 crc kubenswrapper[4812]: I0218 17:03:03.414253 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:03:03 crc kubenswrapper[4812]: I0218 17:03:03.414913 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:03:14 crc kubenswrapper[4812]: I0218 17:03:14.178742 4812 scope.go:117] "RemoveContainer" containerID="8fa0c14188506757b437bf215dca45319d1ba04309eff2d29c1e38f922bd12b8" Feb 18 17:03:14 crc kubenswrapper[4812]: I0218 17:03:14.208275 4812 scope.go:117] "RemoveContainer" containerID="302696c45a0576987df0b9f5c9ac465f28ba3cb58b5cdc9e34509c0a4c0ee658" Feb 18 17:03:22 crc kubenswrapper[4812]: I0218 17:03:22.043610 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s97ft"] Feb 18 17:03:22 crc kubenswrapper[4812]: I0218 17:03:22.055230 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s97ft"] Feb 18 17:03:22 crc kubenswrapper[4812]: I0218 17:03:22.521695 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7842051b-f5fe-4dd4-8f0d-3c5850dbf55e" path="/var/lib/kubelet/pods/7842051b-f5fe-4dd4-8f0d-3c5850dbf55e/volumes" Feb 18 17:03:23 crc kubenswrapper[4812]: I0218 17:03:23.596967 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed69aece-4a9c-4e29-a245-b31c021bbca6" containerID="5aba22c3dc0d0681a7458f667376abbeebe25854f6160892f40c25e43fec271c" exitCode=0 Feb 18 17:03:23 crc kubenswrapper[4812]: I0218 17:03:23.597125 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" event={"ID":"ed69aece-4a9c-4e29-a245-b31c021bbca6","Type":"ContainerDied","Data":"5aba22c3dc0d0681a7458f667376abbeebe25854f6160892f40c25e43fec271c"} Feb 18 17:03:25 crc 
kubenswrapper[4812]: I0218 17:03:25.019743 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.141519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle\") pod \"ed69aece-4a9c-4e29-a245-b31c021bbca6\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.141629 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam\") pod \"ed69aece-4a9c-4e29-a245-b31c021bbca6\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.141815 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory\") pod \"ed69aece-4a9c-4e29-a245-b31c021bbca6\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.141925 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlh8b\" (UniqueName: \"kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b\") pod \"ed69aece-4a9c-4e29-a245-b31c021bbca6\" (UID: \"ed69aece-4a9c-4e29-a245-b31c021bbca6\") " Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.147911 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ed69aece-4a9c-4e29-a245-b31c021bbca6" (UID: "ed69aece-4a9c-4e29-a245-b31c021bbca6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.148307 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b" (OuterVolumeSpecName: "kube-api-access-zlh8b") pod "ed69aece-4a9c-4e29-a245-b31c021bbca6" (UID: "ed69aece-4a9c-4e29-a245-b31c021bbca6"). InnerVolumeSpecName "kube-api-access-zlh8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.170643 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory" (OuterVolumeSpecName: "inventory") pod "ed69aece-4a9c-4e29-a245-b31c021bbca6" (UID: "ed69aece-4a9c-4e29-a245-b31c021bbca6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.176458 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed69aece-4a9c-4e29-a245-b31c021bbca6" (UID: "ed69aece-4a9c-4e29-a245-b31c021bbca6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.245793 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.245833 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlh8b\" (UniqueName: \"kubernetes.io/projected/ed69aece-4a9c-4e29-a245-b31c021bbca6-kube-api-access-zlh8b\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.245848 4812 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.245861 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed69aece-4a9c-4e29-a245-b31c021bbca6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.622407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" event={"ID":"ed69aece-4a9c-4e29-a245-b31c021bbca6","Type":"ContainerDied","Data":"d073e71cf91938519d3d8103be62acafd56b1b266dfa7cc579327ebc1fa28079"} Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.622449 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d073e71cf91938519d3d8103be62acafd56b1b266dfa7cc579327ebc1fa28079" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.622463 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.729330 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8"] Feb 18 17:03:25 crc kubenswrapper[4812]: E0218 17:03:25.730181 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e55d885-ab77-4a7f-a3ea-085212e6fb6c" containerName="keystone-cron" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.730204 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e55d885-ab77-4a7f-a3ea-085212e6fb6c" containerName="keystone-cron" Feb 18 17:03:25 crc kubenswrapper[4812]: E0218 17:03:25.730248 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed69aece-4a9c-4e29-a245-b31c021bbca6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.730259 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed69aece-4a9c-4e29-a245-b31c021bbca6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.730503 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed69aece-4a9c-4e29-a245-b31c021bbca6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.730545 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e55d885-ab77-4a7f-a3ea-085212e6fb6c" containerName="keystone-cron" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.731451 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.741351 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8"] Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.741873 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.742942 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.743129 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.743262 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.759469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fr6\" (UniqueName: \"kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.759811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.760264 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.862339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7fr6\" (UniqueName: \"kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.862483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.862611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.876864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.877287 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:25 crc kubenswrapper[4812]: I0218 17:03:25.880865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7fr6\" (UniqueName: \"kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:26 crc kubenswrapper[4812]: I0218 17:03:26.058732 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:03:26 crc kubenswrapper[4812]: I0218 17:03:26.584858 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8"] Feb 18 17:03:26 crc kubenswrapper[4812]: I0218 17:03:26.586368 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:03:26 crc kubenswrapper[4812]: I0218 17:03:26.633460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" event={"ID":"28ecf721-9079-464c-8eb7-317ade066a09","Type":"ContainerStarted","Data":"2ea74bba244dec1f08fb5b88c2e1a9397925f730f6074ef57958521746f3a30a"} Feb 18 17:03:27 crc kubenswrapper[4812]: I0218 17:03:27.643548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" event={"ID":"28ecf721-9079-464c-8eb7-317ade066a09","Type":"ContainerStarted","Data":"a1065fe4db6715082ab8cf0f9c2fd939e230b1a6d41022cd289085912b4f8a09"} Feb 18 17:03:27 crc kubenswrapper[4812]: I0218 17:03:27.662228 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" podStartSLOduration=2.173222403 podStartE2EDuration="2.662208484s" podCreationTimestamp="2026-02-18 17:03:25 +0000 UTC" firstStartedPulling="2026-02-18 17:03:26.586036429 +0000 UTC m=+2026.851647338" lastFinishedPulling="2026-02-18 17:03:27.07502251 +0000 UTC m=+2027.340633419" observedRunningTime="2026-02-18 17:03:27.659579289 +0000 UTC m=+2027.925190198" watchObservedRunningTime="2026-02-18 17:03:27.662208484 +0000 UTC m=+2027.927819393" Feb 18 17:03:29 crc 
kubenswrapper[4812]: I0218 17:03:29.956961 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:29 crc kubenswrapper[4812]: I0218 17:03:29.967266 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:29 crc kubenswrapper[4812]: I0218 17:03:29.970345 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.055433 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.055718 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.055783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwvf\" (UniqueName: \"kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.157766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.157894 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.157922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwvf\" (UniqueName: \"kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.158597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.158801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.180022 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwvf\" (UniqueName: \"kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf\") pod \"community-operators-chfv8\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.297524 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:30 crc kubenswrapper[4812]: I0218 17:03:30.849483 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:30 crc kubenswrapper[4812]: W0218 17:03:30.850223 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5193859_c706_4abc_b9d2_5743f2b32462.slice/crio-805010b924701c3fb3b9e5da06cceb77b691fb661ba462917c0c6ccfaf5bb39c WatchSource:0}: Error finding container 805010b924701c3fb3b9e5da06cceb77b691fb661ba462917c0c6ccfaf5bb39c: Status 404 returned error can't find the container with id 805010b924701c3fb3b9e5da06cceb77b691fb661ba462917c0c6ccfaf5bb39c Feb 18 17:03:31 crc kubenswrapper[4812]: I0218 17:03:31.685857 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5193859-c706-4abc-b9d2-5743f2b32462" containerID="9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6" exitCode=0 Feb 18 17:03:31 crc kubenswrapper[4812]: I0218 17:03:31.686240 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerDied","Data":"9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6"} Feb 18 17:03:31 crc kubenswrapper[4812]: I0218 17:03:31.686281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerStarted","Data":"805010b924701c3fb3b9e5da06cceb77b691fb661ba462917c0c6ccfaf5bb39c"} Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.413570 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.413846 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.413893 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.414621 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.414666 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b" gracePeriod=600 Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.710216 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5193859-c706-4abc-b9d2-5743f2b32462" containerID="8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332" exitCode=0 Feb 18 17:03:33 crc kubenswrapper[4812]: I0218 17:03:33.710337 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerDied","Data":"8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332"} Feb 18 17:03:35 crc kubenswrapper[4812]: I0218 17:03:35.730924 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b" exitCode=0 Feb 18 17:03:35 crc kubenswrapper[4812]: I0218 17:03:35.730988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b"} Feb 18 17:03:35 crc kubenswrapper[4812]: I0218 17:03:35.731293 4812 scope.go:117] "RemoveContainer" containerID="8902a14dd51646842173659732b90b447a72932aa89883fe61164eddde3fd72c" Feb 18 17:03:37 crc kubenswrapper[4812]: I0218 17:03:37.048883 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rxnsr"] Feb 18 17:03:37 crc kubenswrapper[4812]: I0218 17:03:37.065185 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rxnsr"] Feb 18 17:03:37 crc kubenswrapper[4812]: I0218 17:03:37.752796 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c"} Feb 18 17:03:37 crc kubenswrapper[4812]: I0218 17:03:37.758783 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerStarted","Data":"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94"} Feb 18 17:03:37 crc kubenswrapper[4812]: I0218 17:03:37.798431 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chfv8" podStartSLOduration=3.7030245109999997 podStartE2EDuration="8.798402944s" podCreationTimestamp="2026-02-18 17:03:29 +0000 UTC" firstStartedPulling="2026-02-18 17:03:31.689394921 +0000 UTC m=+2031.955005840" lastFinishedPulling="2026-02-18 17:03:36.784773364 +0000 UTC m=+2037.050384273" observedRunningTime="2026-02-18 17:03:37.792787874 +0000 UTC m=+2038.058398793" 
watchObservedRunningTime="2026-02-18 17:03:37.798402944 +0000 UTC m=+2038.064013853" Feb 18 17:03:38 crc kubenswrapper[4812]: I0218 17:03:38.518792 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61496010-8bfd-4169-b604-2d595bfc2bf1" path="/var/lib/kubelet/pods/61496010-8bfd-4169-b604-2d595bfc2bf1/volumes" Feb 18 17:03:40 crc kubenswrapper[4812]: I0218 17:03:40.298120 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:40 crc kubenswrapper[4812]: I0218 17:03:40.298489 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:40 crc kubenswrapper[4812]: I0218 17:03:40.346311 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:50 crc kubenswrapper[4812]: I0218 17:03:50.360688 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:50 crc kubenswrapper[4812]: I0218 17:03:50.417431 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:50 crc kubenswrapper[4812]: I0218 17:03:50.882771 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chfv8" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="registry-server" containerID="cri-o://2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94" gracePeriod=2 Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.341173 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.436742 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities\") pod \"e5193859-c706-4abc-b9d2-5743f2b32462\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.436877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content\") pod \"e5193859-c706-4abc-b9d2-5743f2b32462\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.437065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwvf\" (UniqueName: \"kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf\") pod \"e5193859-c706-4abc-b9d2-5743f2b32462\" (UID: \"e5193859-c706-4abc-b9d2-5743f2b32462\") " Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.438478 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities" (OuterVolumeSpecName: "utilities") pod "e5193859-c706-4abc-b9d2-5743f2b32462" (UID: "e5193859-c706-4abc-b9d2-5743f2b32462"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.445508 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf" (OuterVolumeSpecName: "kube-api-access-rcwvf") pod "e5193859-c706-4abc-b9d2-5743f2b32462" (UID: "e5193859-c706-4abc-b9d2-5743f2b32462"). InnerVolumeSpecName "kube-api-access-rcwvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.492069 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5193859-c706-4abc-b9d2-5743f2b32462" (UID: "e5193859-c706-4abc-b9d2-5743f2b32462"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.539626 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.539689 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5193859-c706-4abc-b9d2-5743f2b32462-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.539704 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwvf\" (UniqueName: \"kubernetes.io/projected/e5193859-c706-4abc-b9d2-5743f2b32462-kube-api-access-rcwvf\") on node \"crc\" DevicePath \"\"" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.893073 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5193859-c706-4abc-b9d2-5743f2b32462" containerID="2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94" exitCode=0 Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.893133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerDied","Data":"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94"} Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.893169 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfv8" event={"ID":"e5193859-c706-4abc-b9d2-5743f2b32462","Type":"ContainerDied","Data":"805010b924701c3fb3b9e5da06cceb77b691fb661ba462917c0c6ccfaf5bb39c"} Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.893185 4812 scope.go:117] "RemoveContainer" containerID="2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.893179 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chfv8" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.923586 4812 scope.go:117] "RemoveContainer" containerID="8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.928266 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.937332 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chfv8"] Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.945691 4812 scope.go:117] "RemoveContainer" containerID="9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.986295 4812 scope.go:117] "RemoveContainer" containerID="2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94" Feb 18 17:03:51 crc kubenswrapper[4812]: E0218 17:03:51.986756 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94\": container with ID starting with 2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94 not found: ID does not exist" containerID="2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.986808 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94"} err="failed to get container status \"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94\": rpc error: code = NotFound desc = could not find container \"2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94\": container with ID starting with 2c3e1afa336a28bd212866a25a9bebdd7341499880816f3cd2fb1f5747afac94 not found: ID does not exist" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.986844 4812 scope.go:117] "RemoveContainer" containerID="8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332" Feb 18 17:03:51 crc kubenswrapper[4812]: E0218 17:03:51.987528 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332\": container with ID starting with 8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332 not found: ID does not exist" containerID="8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.987561 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332"} err="failed to get container status \"8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332\": rpc error: code = NotFound desc = could not find container \"8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332\": container with ID starting with 8bcd897032d4ff29102b3ac131b31d95866ca7af6156c40909d3db442174a332 not found: ID does not exist" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.987588 4812 scope.go:117] "RemoveContainer" containerID="9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6" Feb 18 17:03:51 crc kubenswrapper[4812]: E0218 17:03:51.987922 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6\": container with ID starting with 9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6 not found: ID does not exist" containerID="9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6" Feb 18 17:03:51 crc kubenswrapper[4812]: I0218 17:03:51.987957 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6"} err="failed to get container status \"9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6\": rpc error: code = NotFound desc = could not find container \"9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6\": container with ID starting with 9731021b6871b27e9f300d68a3bce674689719f7c24c9a70bf7b8408794240e6 not found: ID does not exist" Feb 18 17:03:52 crc kubenswrapper[4812]: I0218 17:03:52.519468 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" path="/var/lib/kubelet/pods/e5193859-c706-4abc-b9d2-5743f2b32462/volumes" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.256298 4812 scope.go:117] "RemoveContainer" containerID="efc686d66cc92ed86b7da9d8c989ea167b2f36d347a16fbffb605fd163f56d16" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.284740 4812 scope.go:117] "RemoveContainer" containerID="18b6bb67866aa801b00340daaef8b5397548215fe9ccd04bb57d123e7c425eb2" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.332135 4812 scope.go:117] "RemoveContainer" containerID="ea42ea77027f8f17414e49487bdb145ab894717ee91c08eddb0923ba4067493c" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.349252 4812 scope.go:117] "RemoveContainer" containerID="1c05da71c3a9ba4947cfb1f95772b8b2bd3749c3d56ce7904e354c6b21450196" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.383746 4812 scope.go:117] "RemoveContainer" containerID="a163df77416a64f48a806f99ea069e2792bceb23044f8df779e0a1e0d6efec8c" Feb 18 17:04:14 crc kubenswrapper[4812]: I0218 17:04:14.423179 4812 scope.go:117] "RemoveContainer" containerID="cd4b260a85c7f7fe7ef10a6b462201c8378f854c1ea2d2a540cc0911ec87a9a3" Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.057463 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7tnx6"] Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.070605 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rtd9r"] Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.083215 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7tnx6"] Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.095676 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rtd9r"] Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.518224 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09eb0e05-320a-463b-85cd-e1e387bb2610" path="/var/lib/kubelet/pods/09eb0e05-320a-463b-85cd-e1e387bb2610/volumes" Feb 18 17:05:00 crc kubenswrapper[4812]: I0218 17:05:00.518797 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8de8dc-9b45-45b4-88bb-316168633d73" path="/var/lib/kubelet/pods/0a8de8dc-9b45-45b4-88bb-316168633d73/volumes" Feb 18 17:05:14 crc kubenswrapper[4812]: I0218 17:05:14.551932 4812 scope.go:117] "RemoveContainer" 
containerID="bc74b5bf4c3d20bae26ba5febbcb47a79e9df639094d0216b433df9bcb32acf0" Feb 18 17:05:14 crc kubenswrapper[4812]: I0218 17:05:14.587617 4812 scope.go:117] "RemoveContainer" containerID="974ec9d8d111fdb8c79fa08ce8f123e8211389121a03705501c003f02cab124e" Feb 18 17:05:27 crc kubenswrapper[4812]: I0218 17:05:27.107951 4812 generic.go:334] "Generic (PLEG): container finished" podID="28ecf721-9079-464c-8eb7-317ade066a09" containerID="a1065fe4db6715082ab8cf0f9c2fd939e230b1a6d41022cd289085912b4f8a09" exitCode=0 Feb 18 17:05:27 crc kubenswrapper[4812]: I0218 17:05:27.108061 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" event={"ID":"28ecf721-9079-464c-8eb7-317ade066a09","Type":"ContainerDied","Data":"a1065fe4db6715082ab8cf0f9c2fd939e230b1a6d41022cd289085912b4f8a09"} Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.635149 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.679371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory\") pod \"28ecf721-9079-464c-8eb7-317ade066a09\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.679639 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam\") pod \"28ecf721-9079-464c-8eb7-317ade066a09\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.679773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7fr6\" (UniqueName: \"kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6\") pod \"28ecf721-9079-464c-8eb7-317ade066a09\" (UID: \"28ecf721-9079-464c-8eb7-317ade066a09\") " Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.685382 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6" (OuterVolumeSpecName: "kube-api-access-q7fr6") pod "28ecf721-9079-464c-8eb7-317ade066a09" (UID: "28ecf721-9079-464c-8eb7-317ade066a09"). InnerVolumeSpecName "kube-api-access-q7fr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.709767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory" (OuterVolumeSpecName: "inventory") pod "28ecf721-9079-464c-8eb7-317ade066a09" (UID: "28ecf721-9079-464c-8eb7-317ade066a09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.715113 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28ecf721-9079-464c-8eb7-317ade066a09" (UID: "28ecf721-9079-464c-8eb7-317ade066a09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.782858 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.782905 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7fr6\" (UniqueName: \"kubernetes.io/projected/28ecf721-9079-464c-8eb7-317ade066a09-kube-api-access-q7fr6\") on node \"crc\" DevicePath \"\"" Feb 18 17:05:28 crc kubenswrapper[4812]: I0218 17:05:28.782915 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ecf721-9079-464c-8eb7-317ade066a09-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.140163 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" event={"ID":"28ecf721-9079-464c-8eb7-317ade066a09","Type":"ContainerDied","Data":"2ea74bba244dec1f08fb5b88c2e1a9397925f730f6074ef57958521746f3a30a"} Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.140418 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea74bba244dec1f08fb5b88c2e1a9397925f730f6074ef57958521746f3a30a" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.140591 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.253471 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf"] Feb 18 17:05:29 crc kubenswrapper[4812]: E0218 17:05:29.253914 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ecf721-9079-464c-8eb7-317ade066a09" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.253934 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ecf721-9079-464c-8eb7-317ade066a09" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 17:05:29 crc kubenswrapper[4812]: E0218 17:05:29.253948 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="extract-content" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.253957 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="extract-content" Feb 18 17:05:29 crc kubenswrapper[4812]: E0218 17:05:29.253993 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="extract-utilities" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.254000 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="extract-utilities" Feb 18 17:05:29 crc kubenswrapper[4812]: E0218 17:05:29.254012 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="registry-server" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.254018 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="registry-server" Feb 18 17:05:29 crc kubenswrapper[4812]: 
I0218 17:05:29.254827 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5193859-c706-4abc-b9d2-5743f2b32462" containerName="registry-server" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.254856 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ecf721-9079-464c-8eb7-317ade066a09" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.255962 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.263082 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.263311 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.263506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.263562 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.276813 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf"] Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.311779 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.311868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd8n\" (UniqueName: \"kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.311906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.413944 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.414035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dnd8n\" (UniqueName: \"kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.414067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.418636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.418941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.433041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd8n\" (UniqueName: \"kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:29 crc kubenswrapper[4812]: I0218 17:05:29.628319 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:05:30 crc kubenswrapper[4812]: I0218 17:05:30.202064 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf"] Feb 18 17:05:31 crc kubenswrapper[4812]: I0218 17:05:31.157919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" event={"ID":"f98d198a-6397-422d-b0d0-0ec0d74e7f83","Type":"ContainerStarted","Data":"900bc2dbcbbfd4953f123819649d0acce2ed642d26c5566ce8bc35c0ae9ad86e"} Feb 18 17:05:31 crc kubenswrapper[4812]: I0218 17:05:31.158432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" event={"ID":"f98d198a-6397-422d-b0d0-0ec0d74e7f83","Type":"ContainerStarted","Data":"ad70ea92d61a59a2534ef26586bd7a994a62b338876dae53bd72fce51eeb40c3"} Feb 18 17:05:31 crc kubenswrapper[4812]: I0218 17:05:31.193736 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" podStartSLOduration=1.7854815400000001 podStartE2EDuration="2.193711951s" podCreationTimestamp="2026-02-18 17:05:29 +0000 UTC" firstStartedPulling="2026-02-18 17:05:30.210920669 +0000 UTC m=+2150.476531578" lastFinishedPulling="2026-02-18 17:05:30.61915108 +0000 UTC m=+2150.884761989" observedRunningTime="2026-02-18 17:05:31.188395648 +0000 UTC m=+2151.454006557" watchObservedRunningTime="2026-02-18 17:05:31.193711951 +0000 UTC m=+2151.459322870" Feb 18 17:05:35 crc kubenswrapper[4812]: I0218 17:05:35.046065 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tzm7v"] Feb 18 17:05:35 crc kubenswrapper[4812]: I0218 17:05:35.054037 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-tzm7v"] Feb 18 17:05:36 crc kubenswrapper[4812]: I0218 17:05:36.521794 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32f52a7-3dab-42c3-b32d-ae230861ae69" path="/var/lib/kubelet/pods/c32f52a7-3dab-42c3-b32d-ae230861ae69/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.046187 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-28pd7"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.061491 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-65dc9"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.073315 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-56e6-account-create-update-txtgr"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.086435 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b9a0-account-create-update-q27jl"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.093717 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-87be-account-create-update-s5bmn"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.102377 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-28pd7"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.109714 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b9a0-account-create-update-q27jl"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.116744 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-65dc9"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.124158 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-56e6-account-create-update-txtgr"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.131701 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-87be-account-create-update-s5bmn"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.139220 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qnfpn"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.146337 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qnfpn"] Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.535749 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410f0c5b-7215-49bb-a9b1-cff11edd203c" path="/var/lib/kubelet/pods/410f0c5b-7215-49bb-a9b1-cff11edd203c/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.537507 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d4c084-1ec7-4443-862a-a0c1087440dc" path="/var/lib/kubelet/pods/48d4c084-1ec7-4443-862a-a0c1087440dc/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.538493 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87284460-f935-40ef-b594-411190374f3a" path="/var/lib/kubelet/pods/87284460-f935-40ef-b594-411190374f3a/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.539457 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4" path="/var/lib/kubelet/pods/901f24ea-a5b3-4034-a6ee-14b0e5fc5ef4/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.541483 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda780a7-35ad-48e5-b5fb-f37a6f169769" path="/var/lib/kubelet/pods/cda780a7-35ad-48e5-b5fb-f37a6f169769/volumes" Feb 18 17:05:40 crc kubenswrapper[4812]: I0218 17:05:40.542569 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7065b61-1ff1-499d-8a26-0e5597389444" path="/var/lib/kubelet/pods/f7065b61-1ff1-499d-8a26-0e5597389444/volumes" Feb 18 17:05:44 crc kubenswrapper[4812]: I0218 17:05:44.030611 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-284h9"] Feb 18 17:05:44 crc kubenswrapper[4812]: I0218 17:05:44.070930 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-284h9"] Feb 18 17:05:44 crc kubenswrapper[4812]: I0218 17:05:44.520223 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b87b144-e1c5-4d51-b6f1-6896913188d1" path="/var/lib/kubelet/pods/4b87b144-e1c5-4d51-b6f1-6896913188d1/volumes" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.305298 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.309203 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.322501 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.441772 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.441817 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.441875 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2p77\" (UniqueName: \"kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.544256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.544303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.544349 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2p77\" (UniqueName: \"kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.544827 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.544950 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.564587 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p2p77\" (UniqueName: \"kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77\") pod \"redhat-operators-xw758\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:53 crc kubenswrapper[4812]: I0218 17:05:53.632183 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:05:54 crc kubenswrapper[4812]: I0218 17:05:54.088043 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:05:54 crc kubenswrapper[4812]: I0218 17:05:54.406829 4812 generic.go:334] "Generic (PLEG): container finished" podID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerID="0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330" exitCode=0 Feb 18 17:05:54 crc kubenswrapper[4812]: I0218 17:05:54.406937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerDied","Data":"0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330"} Feb 18 17:05:54 crc kubenswrapper[4812]: I0218 17:05:54.407160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerStarted","Data":"02d65c5802d097f6a568654d5ef0e4266b173bd82dcd13df56980efa1cc90f30"} Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.426826 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerStarted","Data":"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7"} Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.885408 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.887525 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.908560 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.909163 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5cp4\" (UniqueName: \"kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.910793 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:05:56 crc kubenswrapper[4812]: I0218 17:05:56.910902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.012188 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.012307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5cp4\" (UniqueName: \"kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.012379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.012716 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.012921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.038011 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g5cp4\" (UniqueName: \"kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4\") pod \"certified-operators-bswxx\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.207652 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:05:57 crc kubenswrapper[4812]: I0218 17:05:57.711488 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:05:58 crc kubenswrapper[4812]: I0218 17:05:58.466195 4812 generic.go:334] "Generic (PLEG): container finished" podID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerID="bdd1ba1eba7ad3db9124ec7429a72ace779278b8bbaab965e4698e19f85343d7" exitCode=0 Feb 18 17:05:58 crc kubenswrapper[4812]: I0218 17:05:58.466254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerDied","Data":"bdd1ba1eba7ad3db9124ec7429a72ace779278b8bbaab965e4698e19f85343d7"} Feb 18 17:05:58 crc kubenswrapper[4812]: I0218 17:05:58.466608 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerStarted","Data":"9b24f08398864753ba7777a783b418108870ecbe16910b46ed965e4335597593"} Feb 18 17:06:00 crc kubenswrapper[4812]: I0218 17:06:00.491402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerStarted","Data":"905779fe15bb6e56a70d6fbfcabf68e090754a588c33d0c8ab25c56c1fcbc5cf"} Feb 18 17:06:00 crc kubenswrapper[4812]: I0218 17:06:00.494679 4812 generic.go:334] "Generic (PLEG): container finished" podID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerID="1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7" exitCode=0 Feb 18 17:06:00 crc kubenswrapper[4812]: I0218 17:06:00.494737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerDied","Data":"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7"} Feb 18 17:06:01 crc kubenswrapper[4812]: E0218 17:06:01.149850 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f701e3_cc93_4788_8734_3a3d84c0b573.slice/crio-conmon-905779fe15bb6e56a70d6fbfcabf68e090754a588c33d0c8ab25c56c1fcbc5cf.scope\": RecentStats: unable to find data in memory cache]" Feb 18 17:06:01 crc kubenswrapper[4812]: I0218 17:06:01.506117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerStarted","Data":"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2"} Feb 18 17:06:01 crc kubenswrapper[4812]: I0218 17:06:01.509362 4812 generic.go:334] "Generic (PLEG): container finished" podID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerID="905779fe15bb6e56a70d6fbfcabf68e090754a588c33d0c8ab25c56c1fcbc5cf" exitCode=0 Feb 18 17:06:01 crc kubenswrapper[4812]: I0218 17:06:01.509408 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerDied","Data":"905779fe15bb6e56a70d6fbfcabf68e090754a588c33d0c8ab25c56c1fcbc5cf"} Feb 18 17:06:01 crc kubenswrapper[4812]: I0218 17:06:01.548703 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xw758" podStartSLOduration=1.692719544 podStartE2EDuration="8.548684529s" podCreationTimestamp="2026-02-18 17:05:53 +0000 UTC" firstStartedPulling="2026-02-18 17:05:54.408738737 +0000 UTC m=+2174.674349646" lastFinishedPulling="2026-02-18 17:06:01.264703722 +0000 UTC m=+2181.530314631" observedRunningTime="2026-02-18 17:06:01.541398718 +0000 UTC m=+2181.807009627" watchObservedRunningTime="2026-02-18 17:06:01.548684529 +0000 UTC m=+2181.814295438" Feb 18 17:06:02 crc kubenswrapper[4812]: I0218 17:06:02.522952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerStarted","Data":"4d187ba39169f3170948054b96b9f447ee320e7d1d8275cfe058bf22775df9d9"} Feb 18 17:06:02 crc kubenswrapper[4812]: I0218 17:06:02.548588 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bswxx" podStartSLOduration=3.07098234 podStartE2EDuration="6.548568347s" podCreationTimestamp="2026-02-18 17:05:56 +0000 UTC" firstStartedPulling="2026-02-18 17:05:58.469854258 +0000 UTC m=+2178.735465167" lastFinishedPulling="2026-02-18 17:06:01.947440275 +0000 UTC m=+2182.213051174" observedRunningTime="2026-02-18 17:06:02.540067065 +0000 UTC m=+2182.805677984" watchObservedRunningTime="2026-02-18 17:06:02.548568347 +0000 UTC m=+2182.814179256" Feb 18 17:06:03 crc kubenswrapper[4812]: I0218 17:06:03.414332 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:06:03 crc kubenswrapper[4812]: I0218 17:06:03.414406 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:06:03 crc kubenswrapper[4812]: I0218 17:06:03.633023 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:03 crc kubenswrapper[4812]: I0218 17:06:03.633119 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:04 crc kubenswrapper[4812]: I0218 17:06:04.684911 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xw758" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" probeResult="failure" output=< Feb 18 17:06:04 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:06:04 crc kubenswrapper[4812]: > Feb 18 17:06:07 crc kubenswrapper[4812]: I0218 17:06:07.208383 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 
17:06:07 crc kubenswrapper[4812]: I0218 17:06:07.208700 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:06:07 crc kubenswrapper[4812]: I0218 17:06:07.265278 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:06:07 crc kubenswrapper[4812]: I0218 17:06:07.610847 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:06:07 crc kubenswrapper[4812]: I0218 17:06:07.659410 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:06:09 crc kubenswrapper[4812]: I0218 17:06:09.584181 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bswxx" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="registry-server" containerID="cri-o://4d187ba39169f3170948054b96b9f447ee320e7d1d8275cfe058bf22775df9d9" gracePeriod=2 Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.597146 4812 generic.go:334] "Generic (PLEG): container finished" podID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerID="4d187ba39169f3170948054b96b9f447ee320e7d1d8275cfe058bf22775df9d9" exitCode=0 Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.597189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerDied","Data":"4d187ba39169f3170948054b96b9f447ee320e7d1d8275cfe058bf22775df9d9"} Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.757729 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.902849 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content\") pod \"83f701e3-cc93-4788-8734-3a3d84c0b573\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.902928 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities\") pod \"83f701e3-cc93-4788-8734-3a3d84c0b573\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.903123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5cp4\" (UniqueName: \"kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4\") pod \"83f701e3-cc93-4788-8734-3a3d84c0b573\" (UID: \"83f701e3-cc93-4788-8734-3a3d84c0b573\") " Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.904281 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities" (OuterVolumeSpecName: "utilities") pod "83f701e3-cc93-4788-8734-3a3d84c0b573" (UID: "83f701e3-cc93-4788-8734-3a3d84c0b573"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.915811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4" (OuterVolumeSpecName: "kube-api-access-g5cp4") pod "83f701e3-cc93-4788-8734-3a3d84c0b573" (UID: "83f701e3-cc93-4788-8734-3a3d84c0b573"). InnerVolumeSpecName "kube-api-access-g5cp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:06:10 crc kubenswrapper[4812]: I0218 17:06:10.962512 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f701e3-cc93-4788-8734-3a3d84c0b573" (UID: "83f701e3-cc93-4788-8734-3a3d84c0b573"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.006082 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5cp4\" (UniqueName: \"kubernetes.io/projected/83f701e3-cc93-4788-8734-3a3d84c0b573-kube-api-access-g5cp4\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.006137 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.006152 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f701e3-cc93-4788-8734-3a3d84c0b573-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.608284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bswxx" event={"ID":"83f701e3-cc93-4788-8734-3a3d84c0b573","Type":"ContainerDied","Data":"9b24f08398864753ba7777a783b418108870ecbe16910b46ed965e4335597593"} Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.608338 4812 scope.go:117] "RemoveContainer" containerID="4d187ba39169f3170948054b96b9f447ee320e7d1d8275cfe058bf22775df9d9" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.608383 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bswxx" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.651981 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.657009 4812 scope.go:117] "RemoveContainer" containerID="905779fe15bb6e56a70d6fbfcabf68e090754a588c33d0c8ab25c56c1fcbc5cf" Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.661429 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bswxx"] Feb 18 17:06:11 crc kubenswrapper[4812]: I0218 17:06:11.681918 4812 scope.go:117] "RemoveContainer" containerID="bdd1ba1eba7ad3db9124ec7429a72ace779278b8bbaab965e4698e19f85343d7" Feb 18 17:06:12 crc kubenswrapper[4812]: I0218 17:06:12.521580 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" path="/var/lib/kubelet/pods/83f701e3-cc93-4788-8734-3a3d84c0b573/volumes" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.699454 4812 scope.go:117] "RemoveContainer" containerID="eba61bef62832a6abfa682b18e1cb4b3612c6255d1c4618b047a3f1bfe62cb30" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.701271 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xw758" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" probeResult="failure" output=< Feb 18 17:06:14 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:06:14 crc kubenswrapper[4812]: > Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.729525 4812 scope.go:117] "RemoveContainer" containerID="6c895d1876bf404d61fd7a1fd326169a88f6a32dfa786e38a728dcb4a654500f" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.782372 4812 scope.go:117] "RemoveContainer" containerID="ed9ee3f4de92f750609e9cf2b5d1f4f4f36ba3f6d51819f2500aa8436c18e7b0" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.843516 4812 scope.go:117] "RemoveContainer" containerID="a730318c76bab26565d75718023912c223bc33527a42f755435fe477c0d014f9" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.907126 4812 scope.go:117] "RemoveContainer" containerID="c7509139e6fa8fa08532972bc94488999ef707df85725a7d81784410cded7af9" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.968114 4812 scope.go:117] "RemoveContainer" containerID="f337f4b82249c74aec58fc00579893bc80b47df97ec12e8d93c058270b06145d" Feb 18 17:06:14 crc kubenswrapper[4812]: I0218 17:06:14.998139 4812 scope.go:117] "RemoveContainer" containerID="29d9d5651b3445da5608d0593e9203023aac128afb2786d4aab574f71cf916f7" Feb 18 17:06:15 crc kubenswrapper[4812]: I0218 17:06:15.024424 4812 scope.go:117] "RemoveContainer" containerID="f16c6f19ff0a675905f40cc8ed610c1f2889f24418838793f7e2cb03a98ddf0f" Feb 18 17:06:23 crc kubenswrapper[4812]: I0218 17:06:23.686741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:23 crc kubenswrapper[4812]: I0218 17:06:23.736938 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:24 crc kubenswrapper[4812]: I0218 17:06:24.519707 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:06:24 crc kubenswrapper[4812]: I0218 17:06:24.724106 4812 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-xw758" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" containerID="cri-o://f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2" gracePeriod=2 Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.200526 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.307744 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2p77\" (UniqueName: \"kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77\") pod \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.307813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities\") pod \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.308106 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content\") pod \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\" (UID: \"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7\") " Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.308702 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities" (OuterVolumeSpecName: "utilities") pod "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" (UID: "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.314073 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77" (OuterVolumeSpecName: "kube-api-access-p2p77") pod "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" (UID: "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7"). InnerVolumeSpecName "kube-api-access-p2p77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.410667 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2p77\" (UniqueName: \"kubernetes.io/projected/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-kube-api-access-p2p77\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.410705 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.441013 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" (UID: "8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.512020 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.741188 4812 generic.go:334] "Generic (PLEG): container finished" podID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerID="f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2" exitCode=0 Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.741217 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw758" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.741248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerDied","Data":"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2"} Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.742606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw758" event={"ID":"8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7","Type":"ContainerDied","Data":"02d65c5802d097f6a568654d5ef0e4266b173bd82dcd13df56980efa1cc90f30"} Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.742632 4812 scope.go:117] "RemoveContainer" containerID="f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.773022 4812 scope.go:117] "RemoveContainer" containerID="1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.785355 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.794301 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xw758"] Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.806025 4812 scope.go:117] "RemoveContainer" containerID="0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.836724 4812 scope.go:117] "RemoveContainer" containerID="f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2" Feb 18 17:06:25 crc kubenswrapper[4812]: E0218 17:06:25.837171 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2\": container with ID starting with f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2 not found: ID does not exist" containerID="f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.837203 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2"} err="failed to get container status \"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2\": rpc error: code = NotFound desc = could not find container \"f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2\": container with ID starting with f1602d711ab2b2d21af201cf780e52546728be7d5d9ef615a44ab94dd80052b2 not found: ID does not exist" Feb 18 17:06:25 crc 
kubenswrapper[4812]: I0218 17:06:25.837224 4812 scope.go:117] "RemoveContainer" containerID="1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7" Feb 18 17:06:25 crc kubenswrapper[4812]: E0218 17:06:25.837449 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7\": container with ID starting with 1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7 not found: ID does not exist" containerID="1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.837473 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7"} err="failed to get container status \"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7\": rpc error: code = NotFound desc = could not find container \"1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7\": container with ID starting with 1f2cbed38fce987c7e21651280d6f4cca75134eb2efc600d4e096a8d9e8466e7 not found: ID does not exist" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.837491 4812 scope.go:117] "RemoveContainer" containerID="0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330" Feb 18 17:06:25 crc kubenswrapper[4812]: E0218 17:06:25.837658 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330\": container with ID starting with 0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330 not found: ID does not exist" containerID="0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330" Feb 18 17:06:25 crc kubenswrapper[4812]: I0218 17:06:25.837679 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330"} err="failed to get container status \"0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330\": rpc error: code = NotFound desc = could not find container \"0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330\": container with ID starting with 0d171301adfb6f00bada5223a68c00177b0a7ea294e60a2162fe013a9ee4a330 not found: ID does not exist" Feb 18 17:06:26 crc kubenswrapper[4812]: I0218 17:06:26.519791 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" path="/var/lib/kubelet/pods/8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7/volumes" Feb 18 17:06:29 crc kubenswrapper[4812]: I0218 17:06:29.049869 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tv8tp"] Feb 18 17:06:29 crc kubenswrapper[4812]: I0218 17:06:29.061420 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tv8tp"] Feb 18 17:06:30 crc kubenswrapper[4812]: I0218 17:06:30.518711 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="432ecdb1-393e-4454-a386-3134c792b4cc" path="/var/lib/kubelet/pods/432ecdb1-393e-4454-a386-3134c792b4cc/volumes" Feb 18 17:06:33 crc kubenswrapper[4812]: I0218 17:06:33.415282 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:06:33 crc kubenswrapper[4812]: I0218 17:06:33.416261 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:06:44 crc kubenswrapper[4812]: I0218 17:06:44.933802 4812 generic.go:334] "Generic (PLEG): container finished" podID="f98d198a-6397-422d-b0d0-0ec0d74e7f83" containerID="900bc2dbcbbfd4953f123819649d0acce2ed642d26c5566ce8bc35c0ae9ad86e" exitCode=0 Feb 18 17:06:44 crc kubenswrapper[4812]: I0218 17:06:44.933920 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" event={"ID":"f98d198a-6397-422d-b0d0-0ec0d74e7f83","Type":"ContainerDied","Data":"900bc2dbcbbfd4953f123819649d0acce2ed642d26c5566ce8bc35c0ae9ad86e"} Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.378344 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.473992 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnd8n\" (UniqueName: \"kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n\") pod \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.474056 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory\") pod \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.474122 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam\") pod \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\" (UID: \"f98d198a-6397-422d-b0d0-0ec0d74e7f83\") " Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.490267 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n" (OuterVolumeSpecName: "kube-api-access-dnd8n") pod "f98d198a-6397-422d-b0d0-0ec0d74e7f83" (UID: "f98d198a-6397-422d-b0d0-0ec0d74e7f83"). InnerVolumeSpecName "kube-api-access-dnd8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.530558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f98d198a-6397-422d-b0d0-0ec0d74e7f83" (UID: "f98d198a-6397-422d-b0d0-0ec0d74e7f83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.576669 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnd8n\" (UniqueName: \"kubernetes.io/projected/f98d198a-6397-422d-b0d0-0ec0d74e7f83-kube-api-access-dnd8n\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.576712 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.620225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory" (OuterVolumeSpecName: "inventory") pod "f98d198a-6397-422d-b0d0-0ec0d74e7f83" (UID: "f98d198a-6397-422d-b0d0-0ec0d74e7f83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.679253 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f98d198a-6397-422d-b0d0-0ec0d74e7f83-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.953844 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" event={"ID":"f98d198a-6397-422d-b0d0-0ec0d74e7f83","Type":"ContainerDied","Data":"ad70ea92d61a59a2534ef26586bd7a994a62b338876dae53bd72fce51eeb40c3"} Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.954177 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad70ea92d61a59a2534ef26586bd7a994a62b338876dae53bd72fce51eeb40c3" Feb 18 17:06:46 crc kubenswrapper[4812]: I0218 17:06:46.954037 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.071849 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v"] Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072526 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072550 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072566 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072574 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072634 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="extract-content" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072642 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="extract-content" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072661 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="extract-content" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072668 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="extract-content" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072711 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="extract-utilities" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072719 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" containerName="extract-utilities" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072735 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="extract-utilities" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072742 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="extract-utilities" Feb 18 17:06:47 crc kubenswrapper[4812]: E0218 17:06:47.072773 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98d198a-6397-422d-b0d0-0ec0d74e7f83" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.072781 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98d198a-6397-422d-b0d0-0ec0d74e7f83" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.073059 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c535fb3-b9e5-437c-9d1f-ae2ed3ebb5c7" containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.073133 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f701e3-cc93-4788-8734-3a3d84c0b573" 
containerName="registry-server" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.073150 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98d198a-6397-422d-b0d0-0ec0d74e7f83" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.074313 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.081066 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.081449 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.081972 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.082144 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.096544 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v"] Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.128756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmgn\" (UniqueName: \"kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.128830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.128868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.230948 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkmgn\" (UniqueName: \"kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.231041 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.231087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.238696 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.240584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.248552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkmgn\" (UniqueName: \"kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.409169 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:47 crc kubenswrapper[4812]: I0218 17:06:47.987910 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v"] Feb 18 17:06:48 crc kubenswrapper[4812]: I0218 17:06:48.978956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" event={"ID":"ccbed1c0-c019-49d0-9c31-3e16f1254d9b","Type":"ContainerStarted","Data":"e6438895cd161620520bd182096c487e011eb75f6673e21c02420061a64e5c71"} Feb 18 17:06:49 crc kubenswrapper[4812]: I0218 17:06:49.991890 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" event={"ID":"ccbed1c0-c019-49d0-9c31-3e16f1254d9b","Type":"ContainerStarted","Data":"0f4fa5371d53e1013af3fee461b8fd5d46cd9eaeb81f313a66b3728890a0c92d"} Feb 18 17:06:50 crc kubenswrapper[4812]: I0218 17:06:50.028664 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" podStartSLOduration=2.226457732 podStartE2EDuration="3.028632717s" podCreationTimestamp="2026-02-18 17:06:47 +0000 UTC" firstStartedPulling="2026-02-18 17:06:47.993693418 +0000 UTC m=+2228.259304327" lastFinishedPulling="2026-02-18 17:06:48.795868403 +0000 UTC m=+2229.061479312" observedRunningTime="2026-02-18 17:06:50.021015697 +0000 UTC m=+2230.286626606" watchObservedRunningTime="2026-02-18 17:06:50.028632717 +0000 UTC m=+2230.294243636" Feb 18 17:06:51 crc kubenswrapper[4812]: I0218 17:06:51.045153 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zdg8w"] Feb 18 17:06:51 crc kubenswrapper[4812]: I0218 17:06:51.055578 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zdg8w"] Feb 18 17:06:52 crc kubenswrapper[4812]: I0218 17:06:52.521282 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4fadf8-05f1-41b5-bfba-e5af3ce82f99" path="/var/lib/kubelet/pods/3a4fadf8-05f1-41b5-bfba-e5af3ce82f99/volumes" Feb 18 17:06:54 crc kubenswrapper[4812]: I0218 17:06:54.039626 4812 generic.go:334] "Generic (PLEG): container finished" podID="ccbed1c0-c019-49d0-9c31-3e16f1254d9b" containerID="0f4fa5371d53e1013af3fee461b8fd5d46cd9eaeb81f313a66b3728890a0c92d" exitCode=0 Feb 18 17:06:54 crc kubenswrapper[4812]: I0218 17:06:54.039689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" event={"ID":"ccbed1c0-c019-49d0-9c31-3e16f1254d9b","Type":"ContainerDied","Data":"0f4fa5371d53e1013af3fee461b8fd5d46cd9eaeb81f313a66b3728890a0c92d"} Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.421649 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.510948 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam\") pod \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.510989 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory\") pod \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.511010 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkmgn\" (UniqueName: \"kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn\") pod \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\" (UID: \"ccbed1c0-c019-49d0-9c31-3e16f1254d9b\") " Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.517012 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn" (OuterVolumeSpecName: "kube-api-access-kkmgn") pod "ccbed1c0-c019-49d0-9c31-3e16f1254d9b" (UID: "ccbed1c0-c019-49d0-9c31-3e16f1254d9b"). InnerVolumeSpecName "kube-api-access-kkmgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.540338 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory" (OuterVolumeSpecName: "inventory") pod "ccbed1c0-c019-49d0-9c31-3e16f1254d9b" (UID: "ccbed1c0-c019-49d0-9c31-3e16f1254d9b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.550409 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccbed1c0-c019-49d0-9c31-3e16f1254d9b" (UID: "ccbed1c0-c019-49d0-9c31-3e16f1254d9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.613744 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.614009 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:55 crc kubenswrapper[4812]: I0218 17:06:55.614021 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkmgn\" (UniqueName: \"kubernetes.io/projected/ccbed1c0-c019-49d0-9c31-3e16f1254d9b-kube-api-access-kkmgn\") on node \"crc\" DevicePath \"\"" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.063644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" event={"ID":"ccbed1c0-c019-49d0-9c31-3e16f1254d9b","Type":"ContainerDied","Data":"e6438895cd161620520bd182096c487e011eb75f6673e21c02420061a64e5c71"} Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.063726 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6438895cd161620520bd182096c487e011eb75f6673e21c02420061a64e5c71" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.063875 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.148133 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr"] Feb 18 17:06:56 crc kubenswrapper[4812]: E0218 17:06:56.148964 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccbed1c0-c019-49d0-9c31-3e16f1254d9b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.149015 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccbed1c0-c019-49d0-9c31-3e16f1254d9b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.149543 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccbed1c0-c019-49d0-9c31-3e16f1254d9b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.150876 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.157894 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.157903 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.157919 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.158627 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.175739 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr"] Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.226682 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.226919 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nh5c\" (UniqueName: \"kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.227372 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.330583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.331261 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.331530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nh5c\" (UniqueName: \"kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.339358 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.339733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.353090 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nh5c\" (UniqueName: \"kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fxpqr\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:56 crc kubenswrapper[4812]: I0218 17:06:56.477873 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:06:57 crc kubenswrapper[4812]: I0218 17:06:57.066359 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr"] Feb 18 17:06:58 crc kubenswrapper[4812]: I0218 17:06:58.084858 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" event={"ID":"d671777c-dea8-4fb2-b203-40fa52f9b093","Type":"ContainerStarted","Data":"fd65914c2f480c7233714641082b7971568a7fc1ea3cde318efa80f46589b533"} Feb 18 17:06:58 crc kubenswrapper[4812]: I0218 17:06:58.085252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" event={"ID":"d671777c-dea8-4fb2-b203-40fa52f9b093","Type":"ContainerStarted","Data":"f25b20c8059bcac5f58eaa61551c6486343723fa5b0dca8aa0f6f968fb783cc9"} Feb 18 17:06:58 crc kubenswrapper[4812]: I0218 17:06:58.110792 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" podStartSLOduration=1.696015346 podStartE2EDuration="2.1107697s" podCreationTimestamp="2026-02-18 17:06:56 +0000 UTC" firstStartedPulling="2026-02-18 17:06:57.084617239 +0000 UTC m=+2237.350228188" lastFinishedPulling="2026-02-18 17:06:57.499371623 +0000 UTC m=+2237.764982542" observedRunningTime="2026-02-18 17:06:58.099408038 +0000 UTC m=+2238.365018967" watchObservedRunningTime="2026-02-18 17:06:58.1107697 +0000 UTC m=+2238.376380599" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.104172 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.107086 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.125147 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.244157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lhj\" (UniqueName: \"kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.244278 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.244321 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.345813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.345994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lhj\" (UniqueName: \"kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.346065 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.346747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.346789 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.366399 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-24lhj\" (UniqueName: \"kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj\") pod \"redhat-marketplace-7d8g5\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.431490 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:00 crc kubenswrapper[4812]: I0218 17:07:00.921588 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:00 crc kubenswrapper[4812]: W0218 17:07:00.926205 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77934b98_9fe0_473b_95be_eaae1d9f1b81.slice/crio-80ff06d6ed208243bae08807f345a62095e2078f3af23f6d5cb1c203c03749a8 WatchSource:0}: Error finding container 80ff06d6ed208243bae08807f345a62095e2078f3af23f6d5cb1c203c03749a8: Status 404 returned error can't find the container with id 80ff06d6ed208243bae08807f345a62095e2078f3af23f6d5cb1c203c03749a8 Feb 18 17:07:01 crc kubenswrapper[4812]: I0218 17:07:01.146006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerStarted","Data":"80ff06d6ed208243bae08807f345a62095e2078f3af23f6d5cb1c203c03749a8"} Feb 18 17:07:02 crc kubenswrapper[4812]: I0218 17:07:02.156555 4812 generic.go:334] "Generic (PLEG): container finished" podID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerID="261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685" exitCode=0 Feb 18 17:07:02 crc kubenswrapper[4812]: I0218 17:07:02.156612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerDied","Data":"261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685"} Feb 18 17:07:03 crc kubenswrapper[4812]: I0218 17:07:03.414092 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:07:03 crc kubenswrapper[4812]: I0218 17:07:03.414503 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:07:03 crc kubenswrapper[4812]: I0218 17:07:03.414564 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:07:03 crc kubenswrapper[4812]: I0218 17:07:03.415258 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:07:03 crc kubenswrapper[4812]: I0218 17:07:03.415327 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" gracePeriod=600 Feb 18 17:07:03 crc kubenswrapper[4812]: E0218 17:07:03.534857 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.176941 4812 generic.go:334] "Generic (PLEG): container finished" podID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerID="132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56" exitCode=0 Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.177074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerDied","Data":"132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56"} Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.181410 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" exitCode=0 Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.181457 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c"} Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.181489 4812 scope.go:117] "RemoveContainer" containerID="e006abe30502230a9a1f8befb69a558145d4db487ddc92e7dda052374357a05b" Feb 18 17:07:04 crc kubenswrapper[4812]: I0218 17:07:04.182167 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:07:04 crc kubenswrapper[4812]: E0218 17:07:04.182460 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:05 crc kubenswrapper[4812]: I0218 17:07:05.192230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerStarted","Data":"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954"} Feb 18 17:07:05 crc kubenswrapper[4812]: I0218 17:07:05.215671 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7d8g5" podStartSLOduration=2.786109999 podStartE2EDuration="5.215650979s" podCreationTimestamp="2026-02-18 17:07:00 +0000 UTC" firstStartedPulling="2026-02-18 17:07:02.159477922 +0000 UTC m=+2242.425088831" 
lastFinishedPulling="2026-02-18 17:07:04.589018892 +0000 UTC m=+2244.854629811" observedRunningTime="2026-02-18 17:07:05.209316302 +0000 UTC m=+2245.474927211" watchObservedRunningTime="2026-02-18 17:07:05.215650979 +0000 UTC m=+2245.481261888" Feb 18 17:07:10 crc kubenswrapper[4812]: I0218 17:07:10.432321 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:10 crc kubenswrapper[4812]: I0218 17:07:10.433957 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:10 crc kubenswrapper[4812]: I0218 17:07:10.484136 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:11 crc kubenswrapper[4812]: I0218 17:07:11.312943 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:11 crc kubenswrapper[4812]: I0218 17:07:11.360398 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:12 crc kubenswrapper[4812]: I0218 17:07:12.043122 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgl82"] Feb 18 17:07:12 crc kubenswrapper[4812]: I0218 17:07:12.053540 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vgl82"] Feb 18 17:07:12 crc kubenswrapper[4812]: I0218 17:07:12.521482 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd32627-112f-4222-913d-a675587a7472" path="/var/lib/kubelet/pods/7dd32627-112f-4222-913d-a675587a7472/volumes" Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.272481 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7d8g5" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="registry-server" containerID="cri-o://f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954" gracePeriod=2 Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.795072 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.966772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content\") pod \"77934b98-9fe0-473b-95be-eaae1d9f1b81\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.966890 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities\") pod \"77934b98-9fe0-473b-95be-eaae1d9f1b81\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.966934 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24lhj\" (UniqueName: \"kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj\") pod \"77934b98-9fe0-473b-95be-eaae1d9f1b81\" (UID: \"77934b98-9fe0-473b-95be-eaae1d9f1b81\") " Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.967721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities" (OuterVolumeSpecName: "utilities") pod "77934b98-9fe0-473b-95be-eaae1d9f1b81" (UID: "77934b98-9fe0-473b-95be-eaae1d9f1b81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:07:13 crc kubenswrapper[4812]: I0218 17:07:13.972994 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj" (OuterVolumeSpecName: "kube-api-access-24lhj") pod "77934b98-9fe0-473b-95be-eaae1d9f1b81" (UID: "77934b98-9fe0-473b-95be-eaae1d9f1b81"). InnerVolumeSpecName "kube-api-access-24lhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.013324 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77934b98-9fe0-473b-95be-eaae1d9f1b81" (UID: "77934b98-9fe0-473b-95be-eaae1d9f1b81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.069872 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.069913 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77934b98-9fe0-473b-95be-eaae1d9f1b81-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.069927 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24lhj\" (UniqueName: \"kubernetes.io/projected/77934b98-9fe0-473b-95be-eaae1d9f1b81-kube-api-access-24lhj\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.281898 4812 generic.go:334] "Generic (PLEG): container finished" podID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerID="f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954" exitCode=0 Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.281950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerDied","Data":"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954"} Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.281982 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8g5" event={"ID":"77934b98-9fe0-473b-95be-eaae1d9f1b81","Type":"ContainerDied","Data":"80ff06d6ed208243bae08807f345a62095e2078f3af23f6d5cb1c203c03749a8"} Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.282001 4812 scope.go:117] "RemoveContainer" containerID="f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.282167 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8g5" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.350463 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.365886 4812 scope.go:117] "RemoveContainer" containerID="132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.375361 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8g5"] Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.422943 4812 scope.go:117] "RemoveContainer" containerID="261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.452090 4812 scope.go:117] "RemoveContainer" containerID="f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954" Feb 18 17:07:14 crc kubenswrapper[4812]: E0218 17:07:14.452602 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954\": container with ID starting with f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954 not found: ID does not exist" containerID="f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.452643 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954"} err="failed to get container status \"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954\": rpc error: code = NotFound desc = could not find container \"f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954\": container with ID starting with f36a00688b490733f02f9a6640b040498b57436b13ab9c156adf4d175e37d954 not found: ID does not exist" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.452674 4812 scope.go:117] "RemoveContainer" containerID="132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56" Feb 18 17:07:14 crc kubenswrapper[4812]: E0218 17:07:14.453550 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56\": container with ID starting with 132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56 not found: ID does not exist" containerID="132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.453593 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56"} err="failed to get container status \"132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56\": rpc error: code = NotFound desc = could not find container \"132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56\": container with ID starting with 132a09d43ef48448ef7b2e54d1a6486de55fa5edf99802986468c9977fb60c56 not found: ID does not exist" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.453617 4812 scope.go:117] "RemoveContainer" containerID="261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685" Feb 18 17:07:14 crc kubenswrapper[4812]: E0218 17:07:14.453899 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685\": container with ID starting with 261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685 not found: ID does not exist" containerID="261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.453927 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685"} err="failed to get container status \"261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685\": rpc error: code = NotFound desc = could not find container \"261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685\": container with ID starting with 261895496e18ee2c91e97dd9255b93bbd25554c2f188a260774c46b665577685 not found: ID does not exist" Feb 18 17:07:14 crc kubenswrapper[4812]: I0218 17:07:14.520326 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" path="/var/lib/kubelet/pods/77934b98-9fe0-473b-95be-eaae1d9f1b81/volumes" Feb 18 17:07:15 crc kubenswrapper[4812]: I0218 17:07:15.216887 4812 scope.go:117] "RemoveContainer" containerID="246d84e31991b7591eb2f6a6eef20aad4ab31f785f5abefa518991ca14058adc" Feb 18 17:07:15 crc kubenswrapper[4812]: I0218 17:07:15.266565 4812 scope.go:117] "RemoveContainer" containerID="900ac1e7823b61cb43d8b708767522ffe936d94978851eab7a706aeb5300b2d6" Feb 18 17:07:15 crc kubenswrapper[4812]: I0218 17:07:15.315443 4812 scope.go:117] "RemoveContainer" containerID="7f21b1116b40c0468682137921a45b955c952440c4edb296e7781ec67d12af3b" Feb 18 17:07:16 crc kubenswrapper[4812]: I0218 17:07:16.508878 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:07:16 crc kubenswrapper[4812]: E0218 17:07:16.509722 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:27 crc kubenswrapper[4812]: I0218 17:07:27.508392 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:07:27 crc kubenswrapper[4812]: E0218 17:07:27.509967 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:33 crc kubenswrapper[4812]: I0218 17:07:33.498570 4812 generic.go:334] "Generic (PLEG): container finished" podID="d671777c-dea8-4fb2-b203-40fa52f9b093" containerID="fd65914c2f480c7233714641082b7971568a7fc1ea3cde318efa80f46589b533" exitCode=0 Feb 18 17:07:33 crc kubenswrapper[4812]: I0218 17:07:33.498619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" 
event={"ID":"d671777c-dea8-4fb2-b203-40fa52f9b093","Type":"ContainerDied","Data":"fd65914c2f480c7233714641082b7971568a7fc1ea3cde318efa80f46589b533"} Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.046870 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.206877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory\") pod \"d671777c-dea8-4fb2-b203-40fa52f9b093\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.206979 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nh5c\" (UniqueName: \"kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c\") pod \"d671777c-dea8-4fb2-b203-40fa52f9b093\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.207191 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam\") pod \"d671777c-dea8-4fb2-b203-40fa52f9b093\" (UID: \"d671777c-dea8-4fb2-b203-40fa52f9b093\") " Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.214541 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c" (OuterVolumeSpecName: "kube-api-access-7nh5c") pod "d671777c-dea8-4fb2-b203-40fa52f9b093" (UID: "d671777c-dea8-4fb2-b203-40fa52f9b093"). InnerVolumeSpecName "kube-api-access-7nh5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.239942 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d671777c-dea8-4fb2-b203-40fa52f9b093" (UID: "d671777c-dea8-4fb2-b203-40fa52f9b093"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.240646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory" (OuterVolumeSpecName: "inventory") pod "d671777c-dea8-4fb2-b203-40fa52f9b093" (UID: "d671777c-dea8-4fb2-b203-40fa52f9b093"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.309871 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.309911 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d671777c-dea8-4fb2-b203-40fa52f9b093-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.309922 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nh5c\" (UniqueName: \"kubernetes.io/projected/d671777c-dea8-4fb2-b203-40fa52f9b093-kube-api-access-7nh5c\") on node \"crc\" DevicePath \"\"" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.521840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" event={"ID":"d671777c-dea8-4fb2-b203-40fa52f9b093","Type":"ContainerDied","Data":"f25b20c8059bcac5f58eaa61551c6486343723fa5b0dca8aa0f6f968fb783cc9"} Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.522245 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25b20c8059bcac5f58eaa61551c6486343723fa5b0dca8aa0f6f968fb783cc9" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.522045 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fxpqr" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.622896 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk"] Feb 18 17:07:35 crc kubenswrapper[4812]: E0218 17:07:35.623387 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="extract-utilities" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623409 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="extract-utilities" Feb 18 17:07:35 crc kubenswrapper[4812]: E0218 17:07:35.623442 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d671777c-dea8-4fb2-b203-40fa52f9b093" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623452 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d671777c-dea8-4fb2-b203-40fa52f9b093" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:07:35 crc kubenswrapper[4812]: E0218 17:07:35.623488 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="registry-server" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623496 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="registry-server" Feb 18 17:07:35 crc kubenswrapper[4812]: E0218 17:07:35.623511 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="extract-content" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623519 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="extract-content" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623756 
4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="77934b98-9fe0-473b-95be-eaae1d9f1b81" containerName="registry-server" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.623778 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d671777c-dea8-4fb2-b203-40fa52f9b093" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.624969 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.627415 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.628198 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.632759 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.634113 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.641530 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk"] Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.718705 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.718818 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fkcv\" (UniqueName: \"kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.718857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.821129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.821310 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fkcv\" (UniqueName: 
\"kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.821355 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.825498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.826208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.838071 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fkcv\" (UniqueName: \"kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:35 crc kubenswrapper[4812]: I0218 17:07:35.945810 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:07:36 crc kubenswrapper[4812]: I0218 17:07:36.480572 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk"] Feb 18 17:07:36 crc kubenswrapper[4812]: W0218 17:07:36.484468 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be91d27_6c6b_4713_9845_4d582116ff6f.slice/crio-a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b WatchSource:0}: Error finding container a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b: Status 404 returned error can't find the container with id a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b Feb 18 17:07:36 crc kubenswrapper[4812]: I0218 17:07:36.534523 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" event={"ID":"6be91d27-6c6b-4713-9845-4d582116ff6f","Type":"ContainerStarted","Data":"a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b"} Feb 18 17:07:38 crc kubenswrapper[4812]: I0218 17:07:38.557181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" event={"ID":"6be91d27-6c6b-4713-9845-4d582116ff6f","Type":"ContainerStarted","Data":"8c2c381b2901aa31135648536e5bb9903416f20f850c565686d48afd9e6d28f7"} Feb 18 17:07:38 crc kubenswrapper[4812]: I0218 17:07:38.574753 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" podStartSLOduration=1.9928628929999999 podStartE2EDuration="3.574712118s" podCreationTimestamp="2026-02-18 17:07:35 +0000 UTC" firstStartedPulling="2026-02-18 17:07:36.487467571 +0000 UTC m=+2276.753078480" lastFinishedPulling="2026-02-18 17:07:38.069316796 +0000 UTC m=+2278.334927705" observedRunningTime="2026-02-18 17:07:38.572310408 +0000 UTC m=+2278.837921317" watchObservedRunningTime="2026-02-18 17:07:38.574712118 +0000 UTC m=+2278.840323027" Feb 18 17:07:40 crc kubenswrapper[4812]: I0218 17:07:40.519000 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:07:40 crc kubenswrapper[4812]: E0218 17:07:40.519768 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:44 crc kubenswrapper[4812]: I0218 17:07:44.049762 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85srd"] Feb 18 17:07:44 crc kubenswrapper[4812]: I0218 17:07:44.060489 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-85srd"] Feb 18 17:07:44 crc kubenswrapper[4812]: I0218 17:07:44.520346 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c79f36-851f-461c-87e0-72071e1b7e22" path="/var/lib/kubelet/pods/77c79f36-851f-461c-87e0-72071e1b7e22/volumes" Feb 18 17:07:51 crc kubenswrapper[4812]: I0218 17:07:51.507684 4812 scope.go:117] "RemoveContainer" 
containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:07:51 crc kubenswrapper[4812]: E0218 17:07:51.508244 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:07:59 crc kubenswrapper[4812]: I0218 17:07:59.036038 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fxqc"] Feb 18 17:07:59 crc kubenswrapper[4812]: I0218 17:07:59.044237 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5fxqc"] Feb 18 17:08:00 crc kubenswrapper[4812]: I0218 17:08:00.520506 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3f37a0-cac2-4ac9-a087-ef87868855f7" path="/var/lib/kubelet/pods/dc3f37a0-cac2-4ac9-a087-ef87868855f7/volumes" Feb 18 17:08:03 crc kubenswrapper[4812]: I0218 17:08:03.508088 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:08:03 crc kubenswrapper[4812]: E0218 17:08:03.508664 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:08:15 crc kubenswrapper[4812]: I0218 17:08:15.456999 4812 scope.go:117] "RemoveContainer" containerID="fac5100eb8620201776fa1124ccacdbfb55267c5780e4777974f050f163d3f54" Feb 18 17:08:15 crc kubenswrapper[4812]: I0218 17:08:15.520344 4812 scope.go:117] "RemoveContainer" containerID="3e4c35349c11fde8fcd51f4205b1fe847d6a381038f7f4dcc4cff42b3bcfe304" Feb 18 17:08:18 crc kubenswrapper[4812]: I0218 17:08:18.509588 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:08:18 crc kubenswrapper[4812]: E0218 17:08:18.510516 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:08:24 crc kubenswrapper[4812]: E0218 17:08:24.729207 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be91d27_6c6b_4713_9845_4d582116ff6f.slice/crio-conmon-8c2c381b2901aa31135648536e5bb9903416f20f850c565686d48afd9e6d28f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be91d27_6c6b_4713_9845_4d582116ff6f.slice/crio-8c2c381b2901aa31135648536e5bb9903416f20f850c565686d48afd9e6d28f7.scope\": RecentStats: unable to find data in memory cache]" Feb 18 17:08:25 crc 
kubenswrapper[4812]: I0218 17:08:25.135580 4812 generic.go:334] "Generic (PLEG): container finished" podID="6be91d27-6c6b-4713-9845-4d582116ff6f" containerID="8c2c381b2901aa31135648536e5bb9903416f20f850c565686d48afd9e6d28f7" exitCode=0 Feb 18 17:08:25 crc kubenswrapper[4812]: I0218 17:08:25.135692 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" event={"ID":"6be91d27-6c6b-4713-9845-4d582116ff6f","Type":"ContainerDied","Data":"8c2c381b2901aa31135648536e5bb9903416f20f850c565686d48afd9e6d28f7"} Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.582248 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.705746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory\") pod \"6be91d27-6c6b-4713-9845-4d582116ff6f\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.705896 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fkcv\" (UniqueName: \"kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv\") pod \"6be91d27-6c6b-4713-9845-4d582116ff6f\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.705953 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam\") pod \"6be91d27-6c6b-4713-9845-4d582116ff6f\" (UID: \"6be91d27-6c6b-4713-9845-4d582116ff6f\") " Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.718493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv" (OuterVolumeSpecName: "kube-api-access-4fkcv") pod "6be91d27-6c6b-4713-9845-4d582116ff6f" (UID: "6be91d27-6c6b-4713-9845-4d582116ff6f"). InnerVolumeSpecName "kube-api-access-4fkcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.736846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory" (OuterVolumeSpecName: "inventory") pod "6be91d27-6c6b-4713-9845-4d582116ff6f" (UID: "6be91d27-6c6b-4713-9845-4d582116ff6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.742662 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6be91d27-6c6b-4713-9845-4d582116ff6f" (UID: "6be91d27-6c6b-4713-9845-4d582116ff6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.808831 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.808870 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fkcv\" (UniqueName: \"kubernetes.io/projected/6be91d27-6c6b-4713-9845-4d582116ff6f-kube-api-access-4fkcv\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:26 crc kubenswrapper[4812]: I0218 17:08:26.808881 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be91d27-6c6b-4713-9845-4d582116ff6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.203561 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" event={"ID":"6be91d27-6c6b-4713-9845-4d582116ff6f","Type":"ContainerDied","Data":"a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b"} Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.206236 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5392b2c09e745f38ba9538eae5bb2d403bc2121fe28bb7d3563feb3bca3654b" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.206164 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.257250 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-56nkv"] Feb 18 17:08:27 crc kubenswrapper[4812]: E0218 17:08:27.257845 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be91d27-6c6b-4713-9845-4d582116ff6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.257874 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be91d27-6c6b-4713-9845-4d582116ff6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.258123 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be91d27-6c6b-4713-9845-4d582116ff6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.259078 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.264539 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.264812 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.265077 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.265182 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.271544 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-56nkv"] Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.319282 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.319553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8bz6\" (UniqueName: \"kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.319727 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.421422 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.421683 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8bz6\" (UniqueName: \"kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.421783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc 
kubenswrapper[4812]: I0218 17:08:27.425408 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.426026 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.437820 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8bz6\" (UniqueName: \"kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6\") pod \"ssh-known-hosts-edpm-deployment-56nkv\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:27 crc kubenswrapper[4812]: I0218 17:08:27.582505 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:28 crc kubenswrapper[4812]: I0218 17:08:28.130554 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-56nkv"] Feb 18 17:08:28 crc kubenswrapper[4812]: I0218 17:08:28.141019 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:08:28 crc kubenswrapper[4812]: I0218 17:08:28.216598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" event={"ID":"f60f3d41-33cb-4204-a290-d5bc374f6116","Type":"ContainerStarted","Data":"f11f52c26349fe07c560e91bd4cb6994895b49d617fc6561ef822a708cc543a4"} Feb 18 17:08:29 crc kubenswrapper[4812]: I0218 17:08:29.229718 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" event={"ID":"f60f3d41-33cb-4204-a290-d5bc374f6116","Type":"ContainerStarted","Data":"8163dfb7a04cda5069e21b0b3b331a3deedd65339634540c178351986e5cfde1"} Feb 18 17:08:29 crc kubenswrapper[4812]: I0218 17:08:29.247824 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" podStartSLOduration=1.5761624300000001 podStartE2EDuration="2.247802787s" podCreationTimestamp="2026-02-18 17:08:27 +0000 UTC" firstStartedPulling="2026-02-18 17:08:28.140822972 +0000 UTC m=+2328.406433881" lastFinishedPulling="2026-02-18 17:08:28.812463329 +0000 UTC m=+2329.078074238" observedRunningTime="2026-02-18 17:08:29.243403147 +0000 UTC m=+2329.509014056" watchObservedRunningTime="2026-02-18 17:08:29.247802787 +0000 UTC m=+2329.513413686" Feb 18 17:08:33 crc kubenswrapper[4812]: I0218 17:08:33.509675 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:08:33 crc kubenswrapper[4812]: E0218 17:08:33.510773 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:08:36 crc kubenswrapper[4812]: I0218 17:08:36.297292 4812 generic.go:334] "Generic (PLEG): container finished" podID="f60f3d41-33cb-4204-a290-d5bc374f6116" containerID="8163dfb7a04cda5069e21b0b3b331a3deedd65339634540c178351986e5cfde1" exitCode=0 Feb 18 17:08:36 crc kubenswrapper[4812]: I0218 17:08:36.297365 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" event={"ID":"f60f3d41-33cb-4204-a290-d5bc374f6116","Type":"ContainerDied","Data":"8163dfb7a04cda5069e21b0b3b331a3deedd65339634540c178351986e5cfde1"} Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.784954 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.870426 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8bz6\" (UniqueName: \"kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6\") pod \"f60f3d41-33cb-4204-a290-d5bc374f6116\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.871283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam\") pod \"f60f3d41-33cb-4204-a290-d5bc374f6116\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.871369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0\") pod \"f60f3d41-33cb-4204-a290-d5bc374f6116\" (UID: \"f60f3d41-33cb-4204-a290-d5bc374f6116\") " Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.877832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6" (OuterVolumeSpecName: "kube-api-access-n8bz6") pod "f60f3d41-33cb-4204-a290-d5bc374f6116" (UID: "f60f3d41-33cb-4204-a290-d5bc374f6116"). InnerVolumeSpecName "kube-api-access-n8bz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.920245 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f60f3d41-33cb-4204-a290-d5bc374f6116" (UID: "f60f3d41-33cb-4204-a290-d5bc374f6116"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.971305 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f60f3d41-33cb-4204-a290-d5bc374f6116" (UID: "f60f3d41-33cb-4204-a290-d5bc374f6116"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.975414 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8bz6\" (UniqueName: \"kubernetes.io/projected/f60f3d41-33cb-4204-a290-d5bc374f6116-kube-api-access-n8bz6\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.975447 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:37 crc kubenswrapper[4812]: I0218 17:08:37.975459 4812 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f60f3d41-33cb-4204-a290-d5bc374f6116-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.328082 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" event={"ID":"f60f3d41-33cb-4204-a290-d5bc374f6116","Type":"ContainerDied","Data":"f11f52c26349fe07c560e91bd4cb6994895b49d617fc6561ef822a708cc543a4"} Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.328335 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11f52c26349fe07c560e91bd4cb6994895b49d617fc6561ef822a708cc543a4" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.328231 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-56nkv" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.496523 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb"] Feb 18 17:08:38 crc kubenswrapper[4812]: E0218 17:08:38.497027 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60f3d41-33cb-4204-a290-d5bc374f6116" containerName="ssh-known-hosts-edpm-deployment" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.497054 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60f3d41-33cb-4204-a290-d5bc374f6116" containerName="ssh-known-hosts-edpm-deployment" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.497347 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60f3d41-33cb-4204-a290-d5bc374f6116" containerName="ssh-known-hosts-edpm-deployment" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.498207 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.501516 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.501797 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.502969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.511306 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.531455 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb"] Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.593553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.593734 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.593847 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmz4\" (UniqueName: \"kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.696277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.696369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmz4\" (UniqueName: \"kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.697174 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.700223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.700993 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.716402 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmz4\" (UniqueName: \"kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-znbmb\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:38 crc kubenswrapper[4812]: I0218 17:08:38.828188 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:39 crc kubenswrapper[4812]: I0218 17:08:39.366864 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb"] Feb 18 17:08:40 crc kubenswrapper[4812]: I0218 17:08:40.345387 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" event={"ID":"65be9e89-0994-447a-a008-f08ad56b0371","Type":"ContainerStarted","Data":"9bba556e9c6440e215e9ae6c82b75e25018720d87a37393181444b37b52599aa"} Feb 18 17:08:40 crc kubenswrapper[4812]: I0218 17:08:40.345729 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" event={"ID":"65be9e89-0994-447a-a008-f08ad56b0371","Type":"ContainerStarted","Data":"a8e5a2d5840840d89e128ca72b482bb35301620bbdf1f9bdfb4ce2a0348fcc09"} Feb 18 17:08:40 crc kubenswrapper[4812]: I0218 17:08:40.370387 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" podStartSLOduration=1.9317990539999998 podStartE2EDuration="2.370346802s" podCreationTimestamp="2026-02-18 17:08:38 +0000 UTC" firstStartedPulling="2026-02-18 17:08:39.373929606 +0000 UTC m=+2339.639540515" lastFinishedPulling="2026-02-18 17:08:39.812477354 +0000 UTC m=+2340.078088263" observedRunningTime="2026-02-18 17:08:40.363735886 +0000 UTC m=+2340.629346805" watchObservedRunningTime="2026-02-18 17:08:40.370346802 +0000 UTC m=+2340.635957751" Feb 18 17:08:46 crc kubenswrapper[4812]: I0218 17:08:46.508478 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:08:46 crc kubenswrapper[4812]: E0218 17:08:46.510605 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:08:47 crc kubenswrapper[4812]: I0218 17:08:47.421724 4812 generic.go:334] "Generic (PLEG): container finished" podID="65be9e89-0994-447a-a008-f08ad56b0371" containerID="9bba556e9c6440e215e9ae6c82b75e25018720d87a37393181444b37b52599aa" exitCode=0 Feb 18 17:08:47 crc kubenswrapper[4812]: I0218 17:08:47.421822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" event={"ID":"65be9e89-0994-447a-a008-f08ad56b0371","Type":"ContainerDied","Data":"9bba556e9c6440e215e9ae6c82b75e25018720d87a37393181444b37b52599aa"} Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.863356 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.895701 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory\") pod \"65be9e89-0994-447a-a008-f08ad56b0371\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.895878 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcmz4\" (UniqueName: \"kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4\") pod \"65be9e89-0994-447a-a008-f08ad56b0371\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.895997 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam\") pod \"65be9e89-0994-447a-a008-f08ad56b0371\" (UID: \"65be9e89-0994-447a-a008-f08ad56b0371\") " Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.916501 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4" (OuterVolumeSpecName: "kube-api-access-tcmz4") pod "65be9e89-0994-447a-a008-f08ad56b0371" (UID: "65be9e89-0994-447a-a008-f08ad56b0371"). InnerVolumeSpecName "kube-api-access-tcmz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.934748 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory" (OuterVolumeSpecName: "inventory") pod "65be9e89-0994-447a-a008-f08ad56b0371" (UID: "65be9e89-0994-447a-a008-f08ad56b0371"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.948560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65be9e89-0994-447a-a008-f08ad56b0371" (UID: "65be9e89-0994-447a-a008-f08ad56b0371"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.998404 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcmz4\" (UniqueName: \"kubernetes.io/projected/65be9e89-0994-447a-a008-f08ad56b0371-kube-api-access-tcmz4\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.998447 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:48 crc kubenswrapper[4812]: I0218 17:08:48.998462 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65be9e89-0994-447a-a008-f08ad56b0371-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.444453 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" event={"ID":"65be9e89-0994-447a-a008-f08ad56b0371","Type":"ContainerDied","Data":"a8e5a2d5840840d89e128ca72b482bb35301620bbdf1f9bdfb4ce2a0348fcc09"} Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.444520 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e5a2d5840840d89e128ca72b482bb35301620bbdf1f9bdfb4ce2a0348fcc09" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.444555 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-znbmb" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.542331 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz"] Feb 18 17:08:49 crc kubenswrapper[4812]: E0218 17:08:49.543429 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65be9e89-0994-447a-a008-f08ad56b0371" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.543465 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="65be9e89-0994-447a-a008-f08ad56b0371" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.543853 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="65be9e89-0994-447a-a008-f08ad56b0371" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.545328 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.550243 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.550463 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.550585 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.550815 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.557491 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz"] Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.614452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.614612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wp2l\" (UniqueName: \"kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.614640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.717761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.717906 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wp2l\" (UniqueName: \"kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.717942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.722644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.722825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.747856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wp2l\" (UniqueName: \"kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:49 crc kubenswrapper[4812]: I0218 17:08:49.896736 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:08:50 crc kubenswrapper[4812]: I0218 17:08:50.469357 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz"] Feb 18 17:08:51 crc kubenswrapper[4812]: I0218 17:08:51.464349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" event={"ID":"ff12abcc-555a-4a37-8184-8889c7e5bcd9","Type":"ContainerStarted","Data":"8530020de5a4a5e6516992cdf21e847a5ffb6da8668fd1df49a72a6ab34cae0a"} Feb 18 17:08:51 crc kubenswrapper[4812]: I0218 17:08:51.464401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" event={"ID":"ff12abcc-555a-4a37-8184-8889c7e5bcd9","Type":"ContainerStarted","Data":"ddc8416c10cdad4e9f39eb0489c81a15a31c9f5bf8af52fcb864956711e7c68f"} Feb 18 17:08:51 crc kubenswrapper[4812]: I0218 17:08:51.482819 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" podStartSLOduration=2.0847941309999998 podStartE2EDuration="2.482796144s" podCreationTimestamp="2026-02-18 17:08:49 +0000 UTC" firstStartedPulling="2026-02-18 17:08:50.479835673 +0000 UTC m=+2350.745446582" lastFinishedPulling="2026-02-18 17:08:50.877837686 +0000 UTC m=+2351.143448595" observedRunningTime="2026-02-18 17:08:51.47984128 +0000 UTC m=+2351.745452189" watchObservedRunningTime="2026-02-18 17:08:51.482796144 +0000 UTC m=+2351.748407053" Feb 18 17:08:57 crc kubenswrapper[4812]: I0218 17:08:57.508191 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:08:57 crc kubenswrapper[4812]: E0218 17:08:57.508776 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:09:00 crc kubenswrapper[4812]: I0218 17:09:00.552837 4812 generic.go:334] "Generic (PLEG): container finished" podID="ff12abcc-555a-4a37-8184-8889c7e5bcd9" containerID="8530020de5a4a5e6516992cdf21e847a5ffb6da8668fd1df49a72a6ab34cae0a" exitCode=0 Feb 18 17:09:00 crc kubenswrapper[4812]: I0218 17:09:00.552952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" event={"ID":"ff12abcc-555a-4a37-8184-8889c7e5bcd9","Type":"ContainerDied","Data":"8530020de5a4a5e6516992cdf21e847a5ffb6da8668fd1df49a72a6ab34cae0a"} Feb 18 17:09:01 crc kubenswrapper[4812]: I0218 17:09:01.999028 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.087048 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wp2l\" (UniqueName: \"kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l\") pod \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.087160 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam\") pod \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.087290 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory\") pod \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\" (UID: \"ff12abcc-555a-4a37-8184-8889c7e5bcd9\") " Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.101577 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l" (OuterVolumeSpecName: "kube-api-access-8wp2l") pod "ff12abcc-555a-4a37-8184-8889c7e5bcd9" (UID: "ff12abcc-555a-4a37-8184-8889c7e5bcd9"). InnerVolumeSpecName "kube-api-access-8wp2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.125213 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff12abcc-555a-4a37-8184-8889c7e5bcd9" (UID: "ff12abcc-555a-4a37-8184-8889c7e5bcd9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.133996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory" (OuterVolumeSpecName: "inventory") pod "ff12abcc-555a-4a37-8184-8889c7e5bcd9" (UID: "ff12abcc-555a-4a37-8184-8889c7e5bcd9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.190072 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wp2l\" (UniqueName: \"kubernetes.io/projected/ff12abcc-555a-4a37-8184-8889c7e5bcd9-kube-api-access-8wp2l\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.190125 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.190140 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff12abcc-555a-4a37-8184-8889c7e5bcd9-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.574424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" event={"ID":"ff12abcc-555a-4a37-8184-8889c7e5bcd9","Type":"ContainerDied","Data":"ddc8416c10cdad4e9f39eb0489c81a15a31c9f5bf8af52fcb864956711e7c68f"} Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.574481 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc8416c10cdad4e9f39eb0489c81a15a31c9f5bf8af52fcb864956711e7c68f" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.574519 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.701165 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg"] Feb 18 17:09:02 crc kubenswrapper[4812]: E0218 17:09:02.701679 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff12abcc-555a-4a37-8184-8889c7e5bcd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.701701 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff12abcc-555a-4a37-8184-8889c7e5bcd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.701919 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff12abcc-555a-4a37-8184-8889c7e5bcd9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.702814 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.708260 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.708328 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.708396 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.708968 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.709800 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.711781 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.711998 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.712390 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.720523 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg"] Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807725 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807745 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807767 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807797 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.807815 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvxm\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.808128 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.808325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.808407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.808984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.809230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.809354 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.809400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.910730 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.910795 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911575 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 
17:09:02.911642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911700 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvxm\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911758 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911799 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.911962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.915811 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.915870 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.915998 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.918363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.918737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.918903 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.920256 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.921128 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.921444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.921541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.921964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.922930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.923723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:02 crc kubenswrapper[4812]: I0218 17:09:02.937388 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvxm\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jsszg\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:03 crc kubenswrapper[4812]: I0218 17:09:03.028886 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:03 crc kubenswrapper[4812]: I0218 17:09:03.603564 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg"] Feb 18 17:09:04 crc kubenswrapper[4812]: I0218 17:09:04.602196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" event={"ID":"86b35ff7-e786-4747-877e-c60c4dd3f626","Type":"ContainerStarted","Data":"49bfef3d638df2d25f77c550eb18b6cf9155f77978568c06a9220eace75266c6"} Feb 18 17:09:04 crc kubenswrapper[4812]: I0218 17:09:04.602563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" event={"ID":"86b35ff7-e786-4747-877e-c60c4dd3f626","Type":"ContainerStarted","Data":"f3a7c37510e0afd79dc640b24ecc3b69cb76974a51aaa25a9fd68b8873b2a1fb"} Feb 18 17:09:04 crc kubenswrapper[4812]: I0218 17:09:04.645286 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" podStartSLOduration=2.167482741 podStartE2EDuration="2.645242841s" podCreationTimestamp="2026-02-18 17:09:02 +0000 UTC" firstStartedPulling="2026-02-18 17:09:03.599547831 +0000 UTC m=+2363.865158740" lastFinishedPulling="2026-02-18 17:09:04.077307881 +0000 UTC m=+2364.342918840" observedRunningTime="2026-02-18 17:09:04.628948713 +0000 UTC m=+2364.894559672" watchObservedRunningTime="2026-02-18 17:09:04.645242841 +0000 UTC m=+2364.910853800" Feb 18 17:09:09 crc kubenswrapper[4812]: I0218 17:09:09.508962 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:09:09 crc kubenswrapper[4812]: E0218 17:09:09.510575 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:09:23 crc kubenswrapper[4812]: I0218 17:09:23.508702 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:09:23 crc kubenswrapper[4812]: E0218 17:09:23.510008 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:09:37 crc kubenswrapper[4812]: I0218 17:09:37.508011 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:09:37 crc kubenswrapper[4812]: E0218 17:09:37.508772 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:09:39 crc kubenswrapper[4812]: I0218 17:09:39.961223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" event={"ID":"86b35ff7-e786-4747-877e-c60c4dd3f626","Type":"ContainerDied","Data":"49bfef3d638df2d25f77c550eb18b6cf9155f77978568c06a9220eace75266c6"} Feb 18 17:09:39 crc kubenswrapper[4812]: I0218 17:09:39.961228 4812 generic.go:334] "Generic (PLEG): container finished" podID="86b35ff7-e786-4747-877e-c60c4dd3f626" containerID="49bfef3d638df2d25f77c550eb18b6cf9155f77978568c06a9220eace75266c6" exitCode=0 Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.417316 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472028 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472504 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " 
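
The entries around this point trace one short-lived job pod through the kubelet: the volume reconciler reports MountVolume.SetUp for each secret/projected volume, PLEG emits ContainerStarted and ContainerDied events, and once the container exits with code 0 the reconciler begins UnmountVolume for the same volumes. The following is a minimal sketch, not part of the log, of how such a lifecycle could be pulled out of kubelet journal output for a single pod UID. It assumes journalctl-style output with one entry per line (the dump here wraps several entries per line, so it would need re-splitting first), and the file name podtrace.go is made up for illustration.

    // podtrace.go - rough sketch: scan kubelet journal output on stdin and print
    // container lifecycle events plus volume mount/unmount operations for one pod UID.
    // Example (hypothetical): journalctl -u kubelet | go run podtrace.go <pod-uid>
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var (
        // "SyncLoop (PLEG)" entries carry a struct like
        // event={"ID":"<podUID>","Type":"ContainerStarted","Data":"<containerID>"}.
        plegRe = regexp.MustCompile(`event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)
        // Volume operations quote the volume name with escaped quotes in the raw text,
        // e.g. ... succeeded for volume \"inventory\" (UniqueName: ...).
        mountRe   = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\"`)
        unmountRe = regexp.MustCompile(`UnmountVolume started for volume \\"([^\\"]+)\\"`)
        stampRe   = regexp.MustCompile(`^([A-Z][a-z]{2} +\d+ [0-9:]+)`)
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: podtrace <pod-uid> < kubelet-journal.log")
            os.Exit(1)
        }
        uid := os.Args[1]

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, uid) {
                continue // only keep entries that mention the pod UID somewhere
            }
            ts := ""
            if m := stampRe.FindStringSubmatch(line); m != nil {
                ts = m[1]
            }
            switch {
            case plegRe.MatchString(line):
                m := plegRe.FindStringSubmatch(line)
                if m[1] == uid {
                    fmt.Printf("%s  %-16s %s\n", ts, m[2], m[3])
                }
            case mountRe.MatchString(line):
                fmt.Printf("%s  mounted          %s\n", ts, mountRe.FindStringSubmatch(line)[1])
            case unmountRe.MatchString(line):
                fmt.Printf("%s  unmounting       %s\n", ts, unmountRe.FindStringSubmatch(line)[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "read error:", err)
            os.Exit(1)
        }
    }

Applied to entries like those above and below, this would list the mount of inventory, the ssh key, the CA bundles and the default-certs volumes, the ContainerStarted/ContainerDied pair, and then the corresponding unmounts once the job finishes.
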
Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.472902 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473253 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473361 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473478 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473596 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.473957 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.474141 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvxm\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm\") pod \"86b35ff7-e786-4747-877e-c60c4dd3f626\" (UID: \"86b35ff7-e786-4747-877e-c60c4dd3f626\") " Feb 18 
17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.488542 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm" (OuterVolumeSpecName: "kube-api-access-zsvxm") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "kube-api-access-zsvxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.488643 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.489225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.489471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.489687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.489774 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.489857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.491620 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.491679 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.493130 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.493924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.495500 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.516020 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory" (OuterVolumeSpecName: "inventory") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.530452 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86b35ff7-e786-4747-877e-c60c4dd3f626" (UID: "86b35ff7-e786-4747-877e-c60c4dd3f626"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.577984 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578077 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsvxm\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-kube-api-access-zsvxm\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578122 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578135 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578148 4812 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578164 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578194 4812 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578212 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578223 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578233 4812 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578280 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578312 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578324 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/86b35ff7-e786-4747-877e-c60c4dd3f626-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.578335 4812 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b35ff7-e786-4747-877e-c60c4dd3f626-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.986298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" event={"ID":"86b35ff7-e786-4747-877e-c60c4dd3f626","Type":"ContainerDied","Data":"f3a7c37510e0afd79dc640b24ecc3b69cb76974a51aaa25a9fd68b8873b2a1fb"} Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.986627 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a7c37510e0afd79dc640b24ecc3b69cb76974a51aaa25a9fd68b8873b2a1fb" Feb 18 17:09:41 crc kubenswrapper[4812]: I0218 17:09:41.986366 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jsszg" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.092473 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582"] Feb 18 17:09:42 crc kubenswrapper[4812]: E0218 17:09:42.092932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b35ff7-e786-4747-877e-c60c4dd3f626" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.092953 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b35ff7-e786-4747-877e-c60c4dd3f626" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.093166 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b35ff7-e786-4747-877e-c60c4dd3f626" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.093893 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.096873 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.097120 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.097187 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.097140 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.100292 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.106133 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582"] Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.189740 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.189969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.190088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.190228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.190299 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cf8w\" (UniqueName: \"kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.292002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.292147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.292176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cf8w\" (UniqueName: \"kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.292334 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.292369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.293717 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.297590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.298222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.300042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.316060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cf8w\" (UniqueName: \"kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-95582\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.411128 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:09:42 crc kubenswrapper[4812]: I0218 17:09:42.988464 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582"] Feb 18 17:09:44 crc kubenswrapper[4812]: I0218 17:09:44.034038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" event={"ID":"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b","Type":"ContainerStarted","Data":"9ba9a62106c5f29662e9d709307cd7573316de341c6d5d93aa3edbe8ffdeb42c"} Feb 18 17:09:45 crc kubenswrapper[4812]: I0218 17:09:45.043043 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" event={"ID":"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b","Type":"ContainerStarted","Data":"fc91022af93dcfa92f3da7cc49403d51b4b72a9d0f21ef96d08c31a5fe3e9cb9"} Feb 18 17:09:45 crc kubenswrapper[4812]: I0218 17:09:45.075656 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" podStartSLOduration=2.251479351 podStartE2EDuration="3.07563612s" podCreationTimestamp="2026-02-18 17:09:42 +0000 UTC" firstStartedPulling="2026-02-18 17:09:42.997493041 +0000 UTC m=+2403.263103960" lastFinishedPulling="2026-02-18 17:09:43.8216498 +0000 UTC m=+2404.087260729" observedRunningTime="2026-02-18 17:09:45.06804171 +0000 UTC m=+2405.333652629" watchObservedRunningTime="2026-02-18 17:09:45.07563612 +0000 UTC m=+2405.341247029" Feb 18 17:09:50 crc kubenswrapper[4812]: I0218 17:09:50.514142 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:09:50 crc kubenswrapper[4812]: E0218 17:09:50.514922 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:10:05 crc kubenswrapper[4812]: I0218 17:10:05.508222 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:10:05 crc kubenswrapper[4812]: E0218 17:10:05.508924 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:10:16 crc kubenswrapper[4812]: I0218 17:10:16.510340 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:10:16 crc kubenswrapper[4812]: E0218 17:10:16.511334 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:10:29 crc kubenswrapper[4812]: I0218 17:10:29.508828 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:10:29 crc kubenswrapper[4812]: E0218 17:10:29.510310 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:10:42 crc kubenswrapper[4812]: I0218 17:10:42.508725 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:10:42 crc kubenswrapper[4812]: E0218 17:10:42.509528 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:10:43 crc kubenswrapper[4812]: I0218 17:10:43.619287 4812 generic.go:334] "Generic (PLEG): container finished" podID="8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" containerID="fc91022af93dcfa92f3da7cc49403d51b4b72a9d0f21ef96d08c31a5fe3e9cb9" exitCode=0 Feb 18 17:10:43 crc kubenswrapper[4812]: I0218 17:10:43.619389 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" event={"ID":"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b","Type":"ContainerDied","Data":"fc91022af93dcfa92f3da7cc49403d51b4b72a9d0f21ef96d08c31a5fe3e9cb9"} Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.093058 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.253997 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory\") pod \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.254472 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam\") pod \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.254510 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0\") pod \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.254544 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cf8w\" (UniqueName: \"kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w\") pod \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.254711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle\") pod \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\" (UID: \"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b\") " Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.262836 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w" (OuterVolumeSpecName: "kube-api-access-4cf8w") pod "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" (UID: "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b"). InnerVolumeSpecName "kube-api-access-4cf8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.268062 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" (UID: "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.285706 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" (UID: "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.286634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" (UID: "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.294283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory" (OuterVolumeSpecName: "inventory") pod "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" (UID: "8b2bfdae-9a0f-4740-96f7-f51e1db54c6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.357779 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.357854 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.357877 4812 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.357890 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cf8w\" (UniqueName: \"kubernetes.io/projected/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-kube-api-access-4cf8w\") on node \"crc\" DevicePath \"\"" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.357935 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2bfdae-9a0f-4740-96f7-f51e1db54c6b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.644220 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" event={"ID":"8b2bfdae-9a0f-4740-96f7-f51e1db54c6b","Type":"ContainerDied","Data":"9ba9a62106c5f29662e9d709307cd7573316de341c6d5d93aa3edbe8ffdeb42c"} Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.644332 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba9a62106c5f29662e9d709307cd7573316de341c6d5d93aa3edbe8ffdeb42c" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.644273 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-95582" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.791848 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7"] Feb 18 17:10:45 crc kubenswrapper[4812]: E0218 17:10:45.792732 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.792760 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.793023 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2bfdae-9a0f-4740-96f7-f51e1db54c6b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.794167 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.797898 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.797994 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.798224 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.797898 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.798476 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.802315 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.821762 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7"] Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.870981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.871144 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.871269 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.871311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.871342 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2j5d\" (UniqueName: \"kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.871673 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2j5d\" (UniqueName: \"kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.973449 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.979379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.979379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.980337 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.981985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.982202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:45 crc kubenswrapper[4812]: I0218 17:10:45.992951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2j5d\" (UniqueName: \"kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:46 crc kubenswrapper[4812]: I0218 17:10:46.128923 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:10:46 crc kubenswrapper[4812]: I0218 17:10:46.736256 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7"] Feb 18 17:10:47 crc kubenswrapper[4812]: I0218 17:10:47.666193 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" event={"ID":"d063dbe4-2200-4a71-b1d1-55fa4bc36f63","Type":"ContainerStarted","Data":"2baf4cae71d991536ef9a89100f1bf70c7f9ea98dc06e2ecf397e16fedbacdf0"} Feb 18 17:10:47 crc kubenswrapper[4812]: I0218 17:10:47.666650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" event={"ID":"d063dbe4-2200-4a71-b1d1-55fa4bc36f63","Type":"ContainerStarted","Data":"d9926a575da5c1f7d015820a2dc1d12a77a101c77403e372812b1ab69d37d7c8"} Feb 18 17:10:47 crc kubenswrapper[4812]: I0218 17:10:47.704786 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" podStartSLOduration=2.05389665 podStartE2EDuration="2.704755698s" podCreationTimestamp="2026-02-18 17:10:45 +0000 UTC" firstStartedPulling="2026-02-18 17:10:46.738903257 +0000 UTC m=+2467.004514166" lastFinishedPulling="2026-02-18 17:10:47.389762305 +0000 UTC m=+2467.655373214" observedRunningTime="2026-02-18 17:10:47.688571492 +0000 UTC m=+2467.954182431" watchObservedRunningTime="2026-02-18 17:10:47.704755698 +0000 UTC m=+2467.970366617" Feb 18 17:10:56 crc kubenswrapper[4812]: I0218 17:10:56.509360 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:10:56 crc kubenswrapper[4812]: E0218 17:10:56.510029 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:11:07 crc kubenswrapper[4812]: I0218 17:11:07.508733 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:11:07 crc kubenswrapper[4812]: E0218 17:11:07.509446 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:11:21 crc kubenswrapper[4812]: I0218 17:11:21.508202 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:11:21 crc kubenswrapper[4812]: E0218 17:11:21.509017 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:11:34 crc kubenswrapper[4812]: I0218 17:11:34.122297 4812 generic.go:334] "Generic (PLEG): container finished" podID="d063dbe4-2200-4a71-b1d1-55fa4bc36f63" containerID="2baf4cae71d991536ef9a89100f1bf70c7f9ea98dc06e2ecf397e16fedbacdf0" exitCode=0 Feb 18 17:11:34 crc kubenswrapper[4812]: I0218 17:11:34.122402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" event={"ID":"d063dbe4-2200-4a71-b1d1-55fa4bc36f63","Type":"ContainerDied","Data":"2baf4cae71d991536ef9a89100f1bf70c7f9ea98dc06e2ecf397e16fedbacdf0"} Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.509144 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:11:35 crc kubenswrapper[4812]: E0218 17:11:35.509967 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.538083 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563403 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563585 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563620 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563675 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.563824 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2j5d\" (UniqueName: \"kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d\") pod \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\" (UID: \"d063dbe4-2200-4a71-b1d1-55fa4bc36f63\") " Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.571051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.571141 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d" (OuterVolumeSpecName: "kube-api-access-j2j5d") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "kube-api-access-j2j5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.595439 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.598486 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.613083 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory" (OuterVolumeSpecName: "inventory") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.614841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d063dbe4-2200-4a71-b1d1-55fa4bc36f63" (UID: "d063dbe4-2200-4a71-b1d1-55fa4bc36f63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667254 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667311 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667325 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667337 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667349 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2j5d\" (UniqueName: \"kubernetes.io/projected/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-kube-api-access-j2j5d\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:35 crc kubenswrapper[4812]: I0218 17:11:35.667360 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d063dbe4-2200-4a71-b1d1-55fa4bc36f63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.143730 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" event={"ID":"d063dbe4-2200-4a71-b1d1-55fa4bc36f63","Type":"ContainerDied","Data":"d9926a575da5c1f7d015820a2dc1d12a77a101c77403e372812b1ab69d37d7c8"} Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.143795 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9926a575da5c1f7d015820a2dc1d12a77a101c77403e372812b1ab69d37d7c8" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.143792 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.245938 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln"] Feb 18 17:11:36 crc kubenswrapper[4812]: E0218 17:11:36.246753 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d063dbe4-2200-4a71-b1d1-55fa4bc36f63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.246833 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d063dbe4-2200-4a71-b1d1-55fa4bc36f63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.247140 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d063dbe4-2200-4a71-b1d1-55fa4bc36f63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.248221 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.251982 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.252237 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.252506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.252687 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.263266 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.277307 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln"] Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.385570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.385702 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.385825 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zcd\" (UniqueName: \"kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: 
\"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.385910 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.385951 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.487935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.488036 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.488155 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8zcd\" (UniqueName: \"kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.488223 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.488245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.491821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: 
\"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.492203 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.493778 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.505769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.514313 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8zcd\" (UniqueName: \"kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:36 crc kubenswrapper[4812]: I0218 17:11:36.575581 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:11:37 crc kubenswrapper[4812]: I0218 17:11:37.102227 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln"] Feb 18 17:11:37 crc kubenswrapper[4812]: I0218 17:11:37.155285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" event={"ID":"d7659da5-6aa3-4372-94fb-12a2a30f7d24","Type":"ContainerStarted","Data":"033aeeaa761de3728ee4a44d18b868b6adcb4f54c8d6485231051381d5a4c306"} Feb 18 17:11:38 crc kubenswrapper[4812]: I0218 17:11:38.165047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" event={"ID":"d7659da5-6aa3-4372-94fb-12a2a30f7d24","Type":"ContainerStarted","Data":"c14d87939e8538364c0726236a1cc36c0ad7033920340a3a4150b4560a1c1dfb"} Feb 18 17:11:38 crc kubenswrapper[4812]: I0218 17:11:38.181213 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" podStartSLOduration=1.722142056 podStartE2EDuration="2.181185668s" podCreationTimestamp="2026-02-18 17:11:36 +0000 UTC" firstStartedPulling="2026-02-18 17:11:37.109743073 +0000 UTC m=+2517.375353982" lastFinishedPulling="2026-02-18 17:11:37.568786675 +0000 UTC m=+2517.834397594" observedRunningTime="2026-02-18 17:11:38.179887086 +0000 UTC m=+2518.445498015" watchObservedRunningTime="2026-02-18 17:11:38.181185668 +0000 UTC m=+2518.446796577" Feb 18 17:11:50 crc kubenswrapper[4812]: I0218 17:11:50.515340 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:11:50 crc kubenswrapper[4812]: E0218 17:11:50.516681 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:12:05 crc kubenswrapper[4812]: I0218 17:12:05.507964 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:12:06 crc kubenswrapper[4812]: I0218 17:12:06.447198 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776"} Feb 18 17:14:33 crc kubenswrapper[4812]: I0218 17:14:33.414568 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:14:33 crc kubenswrapper[4812]: I0218 17:14:33.415288 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:14:55 crc 
kubenswrapper[4812]: I0218 17:14:55.871111 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:14:55 crc kubenswrapper[4812]: I0218 17:14:55.877039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:55 crc kubenswrapper[4812]: I0218 17:14:55.890796 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:14:55 crc kubenswrapper[4812]: I0218 17:14:55.960991 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:55 crc kubenswrapper[4812]: I0218 17:14:55.961055 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvh22\" (UniqueName: \"kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:55 crc kubenswrapper[4812]: I0218 17:14:55.961183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.063318 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.063400 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvh22\" (UniqueName: \"kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.063461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.063902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.063936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.084727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvh22\" (UniqueName: \"kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22\") pod \"community-operators-7mwg2\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.203554 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:14:56 crc kubenswrapper[4812]: I0218 17:14:56.685513 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:14:57 crc kubenswrapper[4812]: I0218 17:14:57.269850 4812 generic.go:334] "Generic (PLEG): container finished" podID="57090858-bf69-4841-9e81-d8a0c4892c20" containerID="df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d" exitCode=0 Feb 18 17:14:57 crc kubenswrapper[4812]: I0218 17:14:57.271612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerDied","Data":"df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d"} Feb 18 17:14:57 crc kubenswrapper[4812]: I0218 17:14:57.271677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerStarted","Data":"469f6c1cd1234ba19a2930dfb97debfc8c11364f88288e2a2e6a22d6aec6b87d"} Feb 18 17:14:57 crc kubenswrapper[4812]: I0218 17:14:57.275064 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.147941 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm"] Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.149977 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.153969 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.160138 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.160987 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm"] Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.253193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zl6n\" (UniqueName: \"kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.253283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.253340 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.308850 4812 generic.go:334] "Generic (PLEG): container finished" podID="57090858-bf69-4841-9e81-d8a0c4892c20" containerID="6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970" exitCode=0 Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.308927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerDied","Data":"6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970"} Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.355652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zl6n\" (UniqueName: \"kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.355769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.355807 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.356715 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.365886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.373809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zl6n\" (UniqueName: \"kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n\") pod \"collect-profiles-29523915-vrclm\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.484233 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:00 crc kubenswrapper[4812]: I0218 17:15:00.908619 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm"] Feb 18 17:15:01 crc kubenswrapper[4812]: I0218 17:15:01.324771 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" event={"ID":"76251634-ff4b-4bbe-a040-05f7b8118ec4","Type":"ContainerStarted","Data":"bfb122ec8d3d471c23f35fed2d5a3ce5c84c0826faf8d8e06da37e47e9f3e803"} Feb 18 17:15:01 crc kubenswrapper[4812]: I0218 17:15:01.325396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" event={"ID":"76251634-ff4b-4bbe-a040-05f7b8118ec4","Type":"ContainerStarted","Data":"e9c2de5f1d8db3887e1babe4a3eb37ec8847f07334751d11e4a1b04e6b238c68"} Feb 18 17:15:02 crc kubenswrapper[4812]: I0218 17:15:02.338251 4812 generic.go:334] "Generic (PLEG): container finished" podID="76251634-ff4b-4bbe-a040-05f7b8118ec4" containerID="bfb122ec8d3d471c23f35fed2d5a3ce5c84c0826faf8d8e06da37e47e9f3e803" exitCode=0 Feb 18 17:15:02 crc kubenswrapper[4812]: I0218 17:15:02.338338 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" event={"ID":"76251634-ff4b-4bbe-a040-05f7b8118ec4","Type":"ContainerDied","Data":"bfb122ec8d3d471c23f35fed2d5a3ce5c84c0826faf8d8e06da37e47e9f3e803"} Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.353081 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" 
event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerStarted","Data":"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e"} Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.379776 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mwg2" podStartSLOduration=3.602443234 podStartE2EDuration="8.379747805s" podCreationTimestamp="2026-02-18 17:14:55 +0000 UTC" firstStartedPulling="2026-02-18 17:14:57.272374557 +0000 UTC m=+2717.537985466" lastFinishedPulling="2026-02-18 17:15:02.049679128 +0000 UTC m=+2722.315290037" observedRunningTime="2026-02-18 17:15:03.373041797 +0000 UTC m=+2723.638652716" watchObservedRunningTime="2026-02-18 17:15:03.379747805 +0000 UTC m=+2723.645358714" Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.413729 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.413801 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.747506 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.937935 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zl6n\" (UniqueName: \"kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n\") pod \"76251634-ff4b-4bbe-a040-05f7b8118ec4\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.938610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume\") pod \"76251634-ff4b-4bbe-a040-05f7b8118ec4\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.939057 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume\") pod \"76251634-ff4b-4bbe-a040-05f7b8118ec4\" (UID: \"76251634-ff4b-4bbe-a040-05f7b8118ec4\") " Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.939560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume" (OuterVolumeSpecName: "config-volume") pod "76251634-ff4b-4bbe-a040-05f7b8118ec4" (UID: "76251634-ff4b-4bbe-a040-05f7b8118ec4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.943136 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n" (OuterVolumeSpecName: "kube-api-access-8zl6n") pod "76251634-ff4b-4bbe-a040-05f7b8118ec4" (UID: "76251634-ff4b-4bbe-a040-05f7b8118ec4"). InnerVolumeSpecName "kube-api-access-8zl6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:15:03 crc kubenswrapper[4812]: I0218 17:15:03.944649 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76251634-ff4b-4bbe-a040-05f7b8118ec4" (UID: "76251634-ff4b-4bbe-a040-05f7b8118ec4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.041506 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zl6n\" (UniqueName: \"kubernetes.io/projected/76251634-ff4b-4bbe-a040-05f7b8118ec4-kube-api-access-8zl6n\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.041573 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76251634-ff4b-4bbe-a040-05f7b8118ec4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.041587 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76251634-ff4b-4bbe-a040-05f7b8118ec4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.370574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" event={"ID":"76251634-ff4b-4bbe-a040-05f7b8118ec4","Type":"ContainerDied","Data":"e9c2de5f1d8db3887e1babe4a3eb37ec8847f07334751d11e4a1b04e6b238c68"} Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.370614 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.370644 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9c2de5f1d8db3887e1babe4a3eb37ec8847f07334751d11e4a1b04e6b238c68" Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.826427 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4"] Feb 18 17:15:04 crc kubenswrapper[4812]: I0218 17:15:04.835450 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523870-4nfr4"] Feb 18 17:15:06 crc kubenswrapper[4812]: I0218 17:15:06.203712 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:06 crc kubenswrapper[4812]: I0218 17:15:06.204038 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:06 crc kubenswrapper[4812]: I0218 17:15:06.249868 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:06 crc kubenswrapper[4812]: I0218 17:15:06.526313 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce646036-070b-4e97-bce1-afff187c3c83" path="/var/lib/kubelet/pods/ce646036-070b-4e97-bce1-afff187c3c83/volumes" Feb 18 17:15:15 crc kubenswrapper[4812]: I0218 17:15:15.811845 4812 scope.go:117] "RemoveContainer" containerID="ec3d4718c91aee07b23a565bb1b54938619c43b78b2bd36ad7739f697881b2c6" Feb 18 17:15:16 crc kubenswrapper[4812]: I0218 17:15:16.247893 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:16 crc kubenswrapper[4812]: I0218 17:15:16.299013 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:15:16 crc kubenswrapper[4812]: I0218 17:15:16.517787 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mwg2" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="registry-server" containerID="cri-o://2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e" gracePeriod=2 Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.020266 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.043995 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities\") pod \"57090858-bf69-4841-9e81-d8a0c4892c20\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.044070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvh22\" (UniqueName: \"kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22\") pod \"57090858-bf69-4841-9e81-d8a0c4892c20\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.044298 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content\") pod \"57090858-bf69-4841-9e81-d8a0c4892c20\" (UID: \"57090858-bf69-4841-9e81-d8a0c4892c20\") " Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.045003 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities" (OuterVolumeSpecName: "utilities") pod "57090858-bf69-4841-9e81-d8a0c4892c20" (UID: "57090858-bf69-4841-9e81-d8a0c4892c20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.052281 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22" (OuterVolumeSpecName: "kube-api-access-cvh22") pod "57090858-bf69-4841-9e81-d8a0c4892c20" (UID: "57090858-bf69-4841-9e81-d8a0c4892c20"). InnerVolumeSpecName "kube-api-access-cvh22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.098868 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57090858-bf69-4841-9e81-d8a0c4892c20" (UID: "57090858-bf69-4841-9e81-d8a0c4892c20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.146182 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.146217 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57090858-bf69-4841-9e81-d8a0c4892c20-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.146228 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvh22\" (UniqueName: \"kubernetes.io/projected/57090858-bf69-4841-9e81-d8a0c4892c20-kube-api-access-cvh22\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.530174 4812 generic.go:334] "Generic (PLEG): container finished" podID="57090858-bf69-4841-9e81-d8a0c4892c20" containerID="2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e" exitCode=0 Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.530214 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerDied","Data":"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e"} Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.530244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mwg2" event={"ID":"57090858-bf69-4841-9e81-d8a0c4892c20","Type":"ContainerDied","Data":"469f6c1cd1234ba19a2930dfb97debfc8c11364f88288e2a2e6a22d6aec6b87d"} Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.530260 4812 scope.go:117] "RemoveContainer" containerID="2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.530394 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mwg2" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.558976 4812 scope.go:117] "RemoveContainer" containerID="6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.561507 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.571190 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mwg2"] Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.588498 4812 scope.go:117] "RemoveContainer" containerID="df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.623762 4812 scope.go:117] "RemoveContainer" containerID="2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e" Feb 18 17:15:17 crc kubenswrapper[4812]: E0218 17:15:17.624131 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e\": container with ID starting with 2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e not found: ID does not exist" containerID="2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.624263 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e"} err="failed to get container status \"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e\": rpc error: code = NotFound desc = could not find container \"2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e\": container with ID starting with 2597e81256d17b06beabb3ae709ebf4c73fee81a2317d5785a6b845c182c3d4e not found: ID does not exist" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.624346 4812 scope.go:117] "RemoveContainer" containerID="6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970" Feb 18 17:15:17 crc kubenswrapper[4812]: E0218 17:15:17.624675 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970\": container with ID starting with 6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970 not found: ID does not exist" containerID="6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.624705 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970"} err="failed to get container status \"6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970\": rpc error: code = NotFound desc = could not find container \"6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970\": container with ID starting with 6f3c73bbdd42b5bb074dedbd8ad7b61a0ee197ab9446fa686bd509e1fa700970 not found: ID does not exist" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.624720 4812 scope.go:117] "RemoveContainer" containerID="df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d" Feb 18 17:15:17 crc kubenswrapper[4812]: E0218 17:15:17.625079 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d\": container with ID starting with df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d not found: ID does not exist" containerID="df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d" Feb 18 17:15:17 crc kubenswrapper[4812]: I0218 17:15:17.625117 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d"} err="failed to get container status \"df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d\": rpc error: code = NotFound desc = could not find container \"df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d\": container with ID starting with df16712a2ba5bfd15c007ea0dfd5d085314289809ff99d62927c40deb461390d not found: ID does not exist" Feb 18 17:15:18 crc kubenswrapper[4812]: I0218 17:15:18.527300 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" path="/var/lib/kubelet/pods/57090858-bf69-4841-9e81-d8a0c4892c20/volumes" Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.413736 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.416249 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.416493 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.417901 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.418247 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776" gracePeriod=600 Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.673670 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776" exitCode=0 Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.673792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776"} Feb 18 17:15:33 crc kubenswrapper[4812]: I0218 17:15:33.673945 4812 scope.go:117] "RemoveContainer" containerID="2be2e3df023e227c32914c77d4792c03f2d2f7a3190a3fc31106b3c3c627415c" Feb 18 17:15:34 crc kubenswrapper[4812]: I0218 17:15:34.694504 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086"} Feb 18 17:15:50 crc kubenswrapper[4812]: I0218 17:15:50.889921 4812 generic.go:334] "Generic (PLEG): container finished" podID="d7659da5-6aa3-4372-94fb-12a2a30f7d24" containerID="c14d87939e8538364c0726236a1cc36c0ad7033920340a3a4150b4560a1c1dfb" exitCode=0 Feb 18 17:15:50 crc kubenswrapper[4812]: I0218 17:15:50.889998 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" event={"ID":"d7659da5-6aa3-4372-94fb-12a2a30f7d24","Type":"ContainerDied","Data":"c14d87939e8538364c0726236a1cc36c0ad7033920340a3a4150b4560a1c1dfb"} Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.357021 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.423749 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0\") pod \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.424516 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory\") pod \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.424584 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8zcd\" (UniqueName: \"kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd\") pod \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.424668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle\") pod \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.424945 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam\") pod \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\" (UID: \"d7659da5-6aa3-4372-94fb-12a2a30f7d24\") " Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.445154 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd" (OuterVolumeSpecName: "kube-api-access-l8zcd") 
pod "d7659da5-6aa3-4372-94fb-12a2a30f7d24" (UID: "d7659da5-6aa3-4372-94fb-12a2a30f7d24"). InnerVolumeSpecName "kube-api-access-l8zcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.447378 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d7659da5-6aa3-4372-94fb-12a2a30f7d24" (UID: "d7659da5-6aa3-4372-94fb-12a2a30f7d24"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.451639 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory" (OuterVolumeSpecName: "inventory") pod "d7659da5-6aa3-4372-94fb-12a2a30f7d24" (UID: "d7659da5-6aa3-4372-94fb-12a2a30f7d24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.460657 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d7659da5-6aa3-4372-94fb-12a2a30f7d24" (UID: "d7659da5-6aa3-4372-94fb-12a2a30f7d24"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.468064 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7659da5-6aa3-4372-94fb-12a2a30f7d24" (UID: "d7659da5-6aa3-4372-94fb-12a2a30f7d24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.527757 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.527831 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.527844 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.527853 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8zcd\" (UniqueName: \"kubernetes.io/projected/d7659da5-6aa3-4372-94fb-12a2a30f7d24-kube-api-access-l8zcd\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.527866 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7659da5-6aa3-4372-94fb-12a2a30f7d24-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.910566 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" event={"ID":"d7659da5-6aa3-4372-94fb-12a2a30f7d24","Type":"ContainerDied","Data":"033aeeaa761de3728ee4a44d18b868b6adcb4f54c8d6485231051381d5a4c306"} Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.911028 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033aeeaa761de3728ee4a44d18b868b6adcb4f54c8d6485231051381d5a4c306" Feb 18 17:15:52 crc kubenswrapper[4812]: I0218 17:15:52.910998 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017059 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk"] Feb 18 17:15:53 crc kubenswrapper[4812]: E0218 17:15:53.017490 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="extract-utilities" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017507 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="extract-utilities" Feb 18 17:15:53 crc kubenswrapper[4812]: E0218 17:15:53.017526 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="extract-content" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017533 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="extract-content" Feb 18 17:15:53 crc kubenswrapper[4812]: E0218 17:15:53.017545 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="registry-server" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017552 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="registry-server" Feb 18 17:15:53 crc kubenswrapper[4812]: E0218 17:15:53.017573 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76251634-ff4b-4bbe-a040-05f7b8118ec4" containerName="collect-profiles" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017578 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="76251634-ff4b-4bbe-a040-05f7b8118ec4" containerName="collect-profiles" Feb 18 17:15:53 crc kubenswrapper[4812]: E0218 17:15:53.017591 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7659da5-6aa3-4372-94fb-12a2a30f7d24" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017599 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7659da5-6aa3-4372-94fb-12a2a30f7d24" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017763 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="57090858-bf69-4841-9e81-d8a0c4892c20" containerName="registry-server" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017774 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7659da5-6aa3-4372-94fb-12a2a30f7d24" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.017796 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="76251634-ff4b-4bbe-a040-05f7b8118ec4" containerName="collect-profiles" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.018766 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.021658 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.022325 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.022685 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.022750 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.023091 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.032848 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk"] Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.035319 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.035433 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.136922 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137048 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137124 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlmx\" (UniqueName: 
\"kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137162 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.137441 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.239328 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.239573 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.239668 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.239763 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.239957 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.240499 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.240705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.240852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.240983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.241163 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.241280 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlmx\" (UniqueName: \"kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.242590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.243202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.243523 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.243765 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.244405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.245090 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.245533 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.245904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.249627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.251312 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.261797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlmx\" (UniqueName: \"kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gr8qk\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:53 crc kubenswrapper[4812]: I0218 17:15:53.341292 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:15:54 crc kubenswrapper[4812]: I0218 17:15:53.902057 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk"] Feb 18 17:15:54 crc kubenswrapper[4812]: I0218 17:15:53.921974 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" event={"ID":"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0","Type":"ContainerStarted","Data":"04690a1642b83eb5764743715a2fc33271f421848e301ab39009c1ba40958650"} Feb 18 17:15:54 crc kubenswrapper[4812]: I0218 17:15:54.935823 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" event={"ID":"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0","Type":"ContainerStarted","Data":"4fa269c6972991009c2881a8cc97208ebeb1633129bd057b6a2ce68a7327d97f"} Feb 18 17:15:54 crc kubenswrapper[4812]: I0218 17:15:54.959254 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" podStartSLOduration=2.532508142 podStartE2EDuration="2.959225774s" podCreationTimestamp="2026-02-18 17:15:52 +0000 UTC" firstStartedPulling="2026-02-18 17:15:53.909241576 +0000 UTC m=+2774.174852485" lastFinishedPulling="2026-02-18 17:15:54.335959218 +0000 UTC m=+2774.601570117" observedRunningTime="2026-02-18 17:15:54.955530742 +0000 UTC m=+2775.221141651" watchObservedRunningTime="2026-02-18 17:15:54.959225774 +0000 UTC m=+2775.224836723" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.606852 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.611727 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.620347 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.664275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.664328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grn5\" (UniqueName: \"kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.664433 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.766480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.766704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.766782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grn5\" (UniqueName: \"kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.767390 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.767586 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.788837 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2grn5\" (UniqueName: \"kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5\") pod \"certified-operators-vlhvd\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:17 crc kubenswrapper[4812]: I0218 17:16:17.937786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:18 crc kubenswrapper[4812]: I0218 17:16:18.518749 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:19 crc kubenswrapper[4812]: I0218 17:16:19.184677 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ff39385-22d9-476b-b209-5dff36050428" containerID="b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b" exitCode=0 Feb 18 17:16:19 crc kubenswrapper[4812]: I0218 17:16:19.184750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerDied","Data":"b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b"} Feb 18 17:16:19 crc kubenswrapper[4812]: I0218 17:16:19.185132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerStarted","Data":"f2044b644d83fcfb4a66677cb43cf9693f1f03cb33a483e147f96266f34de3d1"} Feb 18 17:16:21 crc kubenswrapper[4812]: I0218 17:16:21.211335 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ff39385-22d9-476b-b209-5dff36050428" containerID="02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e" exitCode=0 Feb 18 17:16:21 crc kubenswrapper[4812]: I0218 17:16:21.211414 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerDied","Data":"02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e"} Feb 18 17:16:22 crc kubenswrapper[4812]: I0218 17:16:22.227703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerStarted","Data":"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1"} Feb 18 17:16:22 crc kubenswrapper[4812]: I0218 17:16:22.256134 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlhvd" podStartSLOduration=2.775856498 podStartE2EDuration="5.256057404s" podCreationTimestamp="2026-02-18 17:16:17 +0000 UTC" firstStartedPulling="2026-02-18 17:16:19.189364353 +0000 UTC m=+2799.454975272" lastFinishedPulling="2026-02-18 17:16:21.669565279 +0000 UTC m=+2801.935176178" observedRunningTime="2026-02-18 17:16:22.249457509 +0000 UTC m=+2802.515068438" watchObservedRunningTime="2026-02-18 17:16:22.256057404 +0000 UTC m=+2802.521668313" Feb 18 17:16:27 crc kubenswrapper[4812]: I0218 17:16:27.938193 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:27 crc kubenswrapper[4812]: I0218 17:16:27.938571 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:27 crc kubenswrapper[4812]: I0218 17:16:27.998157 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:28 crc kubenswrapper[4812]: I0218 17:16:28.346669 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:28 crc kubenswrapper[4812]: I0218 17:16:28.403049 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.316725 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlhvd" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="registry-server" containerID="cri-o://596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1" gracePeriod=2 Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.808341 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.844797 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grn5\" (UniqueName: \"kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5\") pod \"1ff39385-22d9-476b-b209-5dff36050428\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.844851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content\") pod \"1ff39385-22d9-476b-b209-5dff36050428\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.844918 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities\") pod \"1ff39385-22d9-476b-b209-5dff36050428\" (UID: \"1ff39385-22d9-476b-b209-5dff36050428\") " Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.845830 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities" (OuterVolumeSpecName: "utilities") pod "1ff39385-22d9-476b-b209-5dff36050428" (UID: "1ff39385-22d9-476b-b209-5dff36050428"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.860959 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5" (OuterVolumeSpecName: "kube-api-access-2grn5") pod "1ff39385-22d9-476b-b209-5dff36050428" (UID: "1ff39385-22d9-476b-b209-5dff36050428"). InnerVolumeSpecName "kube-api-access-2grn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.923758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff39385-22d9-476b-b209-5dff36050428" (UID: "1ff39385-22d9-476b-b209-5dff36050428"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.947179 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grn5\" (UniqueName: \"kubernetes.io/projected/1ff39385-22d9-476b-b209-5dff36050428-kube-api-access-2grn5\") on node \"crc\" DevicePath \"\"" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.947222 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:16:30 crc kubenswrapper[4812]: I0218 17:16:30.947235 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff39385-22d9-476b-b209-5dff36050428-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.328266 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ff39385-22d9-476b-b209-5dff36050428" containerID="596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1" exitCode=0 Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.328341 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlhvd" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.328364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerDied","Data":"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1"} Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.329392 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlhvd" event={"ID":"1ff39385-22d9-476b-b209-5dff36050428","Type":"ContainerDied","Data":"f2044b644d83fcfb4a66677cb43cf9693f1f03cb33a483e147f96266f34de3d1"} Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.329415 4812 scope.go:117] "RemoveContainer" containerID="596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.374328 4812 scope.go:117] "RemoveContainer" containerID="02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.382768 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.408278 4812 scope.go:117] "RemoveContainer" containerID="b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.412453 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlhvd"] Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.447909 4812 scope.go:117] "RemoveContainer" containerID="596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1" Feb 18 17:16:31 crc kubenswrapper[4812]: E0218 17:16:31.448344 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1\": container with ID starting with 596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1 not found: ID does not exist" containerID="596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.448392 
4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1"} err="failed to get container status \"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1\": rpc error: code = NotFound desc = could not find container \"596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1\": container with ID starting with 596aa15232f4c15b656f6d9b38ba10da04542863245d608ef7b6bc52499116d1 not found: ID does not exist" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.448419 4812 scope.go:117] "RemoveContainer" containerID="02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e" Feb 18 17:16:31 crc kubenswrapper[4812]: E0218 17:16:31.448790 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e\": container with ID starting with 02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e not found: ID does not exist" containerID="02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.448858 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e"} err="failed to get container status \"02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e\": rpc error: code = NotFound desc = could not find container \"02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e\": container with ID starting with 02f26f30421d575d004b75d4a3b0b04a38cece3589e0768839c9b6e5f7ec3c7e not found: ID does not exist" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.448892 4812 scope.go:117] "RemoveContainer" containerID="b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b" Feb 18 17:16:31 crc kubenswrapper[4812]: E0218 17:16:31.449242 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b\": container with ID starting with b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b not found: ID does not exist" containerID="b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b" Feb 18 17:16:31 crc kubenswrapper[4812]: I0218 17:16:31.449273 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b"} err="failed to get container status \"b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b\": rpc error: code = NotFound desc = could not find container \"b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b\": container with ID starting with b58e4e109a39c020b8468ca76d67eebb932399509e1643fedc904b9ea2f0856b not found: ID does not exist" Feb 18 17:16:32 crc kubenswrapper[4812]: I0218 17:16:32.552534 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff39385-22d9-476b-b209-5dff36050428" path="/var/lib/kubelet/pods/1ff39385-22d9-476b-b209-5dff36050428/volumes" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.745197 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:16:54 crc kubenswrapper[4812]: E0218 17:16:54.746145 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="extract-utilities" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.746163 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="extract-utilities" Feb 18 17:16:54 crc kubenswrapper[4812]: E0218 17:16:54.746176 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="extract-content" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.746182 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="extract-content" Feb 18 17:16:54 crc kubenswrapper[4812]: E0218 17:16:54.746191 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="registry-server" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.746197 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="registry-server" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.746378 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff39385-22d9-476b-b209-5dff36050428" containerName="registry-server" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.747712 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.757572 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.933328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5xm\" (UniqueName: \"kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.933635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:54 crc kubenswrapper[4812]: I0218 17:16:54.933741 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.043626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5xm\" (UniqueName: \"kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.043779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content\") pod \"redhat-operators-6jghx\" 
(UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.043824 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.044725 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.044965 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.093154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5xm\" (UniqueName: \"kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm\") pod \"redhat-operators-6jghx\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.368341 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:16:55 crc kubenswrapper[4812]: I0218 17:16:55.847155 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:16:56 crc kubenswrapper[4812]: I0218 17:16:56.594393 4812 generic.go:334] "Generic (PLEG): container finished" podID="f5707717-1963-470d-9410-5529280bdf85" containerID="bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91" exitCode=0 Feb 18 17:16:56 crc kubenswrapper[4812]: I0218 17:16:56.594452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerDied","Data":"bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91"} Feb 18 17:16:56 crc kubenswrapper[4812]: I0218 17:16:56.594707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerStarted","Data":"f19e34e809a0fc780c92674995229a0d28a78ab0589c6c72a45def43a7a30ff9"} Feb 18 17:16:57 crc kubenswrapper[4812]: I0218 17:16:57.606801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerStarted","Data":"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6"} Feb 18 17:17:05 crc kubenswrapper[4812]: I0218 17:17:05.687701 4812 generic.go:334] "Generic (PLEG): container finished" podID="f5707717-1963-470d-9410-5529280bdf85" containerID="37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6" exitCode=0 Feb 18 17:17:05 crc kubenswrapper[4812]: I0218 17:17:05.687825 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerDied","Data":"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6"} Feb 18 17:17:06 crc kubenswrapper[4812]: I0218 17:17:06.709926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerStarted","Data":"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82"} Feb 18 17:17:06 crc kubenswrapper[4812]: I0218 17:17:06.742633 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jghx" podStartSLOduration=3.092079182 podStartE2EDuration="12.742606216s" podCreationTimestamp="2026-02-18 17:16:54 +0000 UTC" firstStartedPulling="2026-02-18 17:16:56.595978073 +0000 UTC m=+2836.861588982" lastFinishedPulling="2026-02-18 17:17:06.246505107 +0000 UTC m=+2846.512116016" observedRunningTime="2026-02-18 17:17:06.732046133 +0000 UTC m=+2846.997657062" watchObservedRunningTime="2026-02-18 17:17:06.742606216 +0000 UTC m=+2847.008217125" Feb 18 17:17:15 crc kubenswrapper[4812]: I0218 17:17:15.369375 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:15 crc kubenswrapper[4812]: I0218 17:17:15.369935 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:15 crc kubenswrapper[4812]: I0218 17:17:15.506579 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:15 crc kubenswrapper[4812]: I0218 17:17:15.855379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:15 crc kubenswrapper[4812]: I0218 17:17:15.912627 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:17:17 crc kubenswrapper[4812]: I0218 17:17:17.823995 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6jghx" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="registry-server" containerID="cri-o://2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82" gracePeriod=2 Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.297899 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.443180 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities\") pod \"f5707717-1963-470d-9410-5529280bdf85\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.443262 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv5xm\" (UniqueName: \"kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm\") pod \"f5707717-1963-470d-9410-5529280bdf85\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.443343 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content\") pod \"f5707717-1963-470d-9410-5529280bdf85\" (UID: \"f5707717-1963-470d-9410-5529280bdf85\") " Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.445199 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities" (OuterVolumeSpecName: "utilities") pod "f5707717-1963-470d-9410-5529280bdf85" (UID: "f5707717-1963-470d-9410-5529280bdf85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.449837 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm" (OuterVolumeSpecName: "kube-api-access-qv5xm") pod "f5707717-1963-470d-9410-5529280bdf85" (UID: "f5707717-1963-470d-9410-5529280bdf85"). InnerVolumeSpecName "kube-api-access-qv5xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.545269 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.545546 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv5xm\" (UniqueName: \"kubernetes.io/projected/f5707717-1963-470d-9410-5529280bdf85-kube-api-access-qv5xm\") on node \"crc\" DevicePath \"\"" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.569262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5707717-1963-470d-9410-5529280bdf85" (UID: "f5707717-1963-470d-9410-5529280bdf85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.647139 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5707717-1963-470d-9410-5529280bdf85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.836261 4812 generic.go:334] "Generic (PLEG): container finished" podID="f5707717-1963-470d-9410-5529280bdf85" containerID="2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82" exitCode=0 Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.836324 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jghx" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.836347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerDied","Data":"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82"} Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.836440 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jghx" event={"ID":"f5707717-1963-470d-9410-5529280bdf85","Type":"ContainerDied","Data":"f19e34e809a0fc780c92674995229a0d28a78ab0589c6c72a45def43a7a30ff9"} Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.836486 4812 scope.go:117] "RemoveContainer" containerID="2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.871216 4812 scope.go:117] "RemoveContainer" containerID="37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.885866 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.900746 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6jghx"] Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.912779 4812 scope.go:117] "RemoveContainer" containerID="bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.951663 4812 scope.go:117] "RemoveContainer" containerID="2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82" Feb 18 17:17:18 crc kubenswrapper[4812]: E0218 17:17:18.953481 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82\": container with ID starting with 2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82 not found: ID does not exist" containerID="2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.953577 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82"} err="failed to get container status \"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82\": rpc error: code = NotFound desc = could not find container \"2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82\": container with ID starting with 2f787466ccc7af84759ae4bfa1f057c57b26664c131d508ee499331ae5423b82 not found: ID does not exist" Feb 18 17:17:18 crc 
kubenswrapper[4812]: I0218 17:17:18.953614 4812 scope.go:117] "RemoveContainer" containerID="37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6" Feb 18 17:17:18 crc kubenswrapper[4812]: E0218 17:17:18.954514 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6\": container with ID starting with 37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6 not found: ID does not exist" containerID="37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.954563 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6"} err="failed to get container status \"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6\": rpc error: code = NotFound desc = could not find container \"37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6\": container with ID starting with 37caa0f6555bd77731002d559d9bbd8df6e4cc19eca9ab2db0fa4ccd6a830db6 not found: ID does not exist" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.954594 4812 scope.go:117] "RemoveContainer" containerID="bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91" Feb 18 17:17:18 crc kubenswrapper[4812]: E0218 17:17:18.954911 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91\": container with ID starting with bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91 not found: ID does not exist" containerID="bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91" Feb 18 17:17:18 crc kubenswrapper[4812]: I0218 17:17:18.954945 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91"} err="failed to get container status \"bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91\": rpc error: code = NotFound desc = could not find container \"bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91\": container with ID starting with bae9e806ff2086e25af5ee11a3dfdff4b25b83937abcb7d2d07dc4bd089fbd91 not found: ID does not exist" Feb 18 17:17:20 crc kubenswrapper[4812]: I0218 17:17:20.532450 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5707717-1963-470d-9410-5529280bdf85" path="/var/lib/kubelet/pods/f5707717-1963-470d-9410-5529280bdf85/volumes" Feb 18 17:17:33 crc kubenswrapper[4812]: I0218 17:17:33.413484 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:17:33 crc kubenswrapper[4812]: I0218 17:17:33.414082 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:18:03 crc kubenswrapper[4812]: I0218 17:18:03.414212 4812 patch_prober.go:28] interesting 
pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:18:03 crc kubenswrapper[4812]: I0218 17:18:03.414994 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.666722 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:13 crc kubenswrapper[4812]: E0218 17:18:13.667998 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="registry-server" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.668018 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="registry-server" Feb 18 17:18:13 crc kubenswrapper[4812]: E0218 17:18:13.668030 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="extract-content" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.668038 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="extract-content" Feb 18 17:18:13 crc kubenswrapper[4812]: E0218 17:18:13.668062 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="extract-utilities" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.668070 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="extract-utilities" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.668359 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5707717-1963-470d-9410-5529280bdf85" containerName="registry-server" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.672015 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.678510 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.788204 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.788546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.788763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tlw\" (UniqueName: \"kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.890592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.890685 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tlw\" (UniqueName: \"kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.890827 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.891414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.891618 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.927929 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r4tlw\" (UniqueName: \"kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw\") pod \"redhat-marketplace-4xsct\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:13 crc kubenswrapper[4812]: I0218 17:18:13.992880 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:14 crc kubenswrapper[4812]: I0218 17:18:14.543231 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:15 crc kubenswrapper[4812]: I0218 17:18:15.363451 4812 generic.go:334] "Generic (PLEG): container finished" podID="26953756-5b4b-4058-b123-ee8207d04cbc" containerID="60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9" exitCode=0 Feb 18 17:18:15 crc kubenswrapper[4812]: I0218 17:18:15.363530 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerDied","Data":"60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9"} Feb 18 17:18:15 crc kubenswrapper[4812]: I0218 17:18:15.363801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerStarted","Data":"e0380a789f38a1c1f03791fc72405269cfa2282775174863d9bb66d89402fa6c"} Feb 18 17:18:17 crc kubenswrapper[4812]: I0218 17:18:17.393412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerStarted","Data":"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4"} Feb 18 17:18:19 crc kubenswrapper[4812]: I0218 17:18:19.410838 4812 generic.go:334] "Generic (PLEG): container finished" podID="26953756-5b4b-4058-b123-ee8207d04cbc" containerID="a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4" exitCode=0 Feb 18 17:18:19 crc kubenswrapper[4812]: I0218 17:18:19.410951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerDied","Data":"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4"} Feb 18 17:18:20 crc kubenswrapper[4812]: I0218 17:18:20.421268 4812 generic.go:334] "Generic (PLEG): container finished" podID="c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" containerID="4fa269c6972991009c2881a8cc97208ebeb1633129bd057b6a2ce68a7327d97f" exitCode=0 Feb 18 17:18:20 crc kubenswrapper[4812]: I0218 17:18:20.421363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" event={"ID":"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0","Type":"ContainerDied","Data":"4fa269c6972991009c2881a8cc97208ebeb1633129bd057b6a2ce68a7327d97f"} Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.434116 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerStarted","Data":"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224"} Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.455843 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xsct" 
podStartSLOduration=3.317290066 podStartE2EDuration="8.455819817s" podCreationTimestamp="2026-02-18 17:18:13 +0000 UTC" firstStartedPulling="2026-02-18 17:18:15.366467941 +0000 UTC m=+2915.632078850" lastFinishedPulling="2026-02-18 17:18:20.504997692 +0000 UTC m=+2920.770608601" observedRunningTime="2026-02-18 17:18:21.454237578 +0000 UTC m=+2921.719848497" watchObservedRunningTime="2026-02-18 17:18:21.455819817 +0000 UTC m=+2921.721430716" Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.875961 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.951269 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.951318 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.951371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmlmx\" (UniqueName: \"kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952151 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952215 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952270 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952309 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam\") 
pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952487 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952562 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.952629 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0\") pod \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\" (UID: \"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0\") " Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.968505 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx" (OuterVolumeSpecName: "kube-api-access-nmlmx") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "kube-api-access-nmlmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.998087 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:21 crc kubenswrapper[4812]: I0218 17:18:21.999708 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.000039 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.017074 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.018288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.028345 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.032979 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.033079 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory" (OuterVolumeSpecName: "inventory") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.042434 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054205 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054236 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054245 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054253 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054261 4812 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054270 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054279 4812 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054287 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054296 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.054304 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmlmx\" (UniqueName: \"kubernetes.io/projected/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-kube-api-access-nmlmx\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.065058 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" (UID: "c89bb32f-2416-4ee5-82b2-d0378c8cd0c0"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.155886 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/c89bb32f-2416-4ee5-82b2-d0378c8cd0c0-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.444478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" event={"ID":"c89bb32f-2416-4ee5-82b2-d0378c8cd0c0","Type":"ContainerDied","Data":"04690a1642b83eb5764743715a2fc33271f421848e301ab39009c1ba40958650"} Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.444522 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04690a1642b83eb5764743715a2fc33271f421848e301ab39009c1ba40958650" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.444538 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gr8qk" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.536357 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4"] Feb 18 17:18:22 crc kubenswrapper[4812]: E0218 17:18:22.536816 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.536840 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.537136 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89bb32f-2416-4ee5-82b2-d0378c8cd0c0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.537886 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.540598 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.541322 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.541380 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-222zk" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.541561 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.541661 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.545016 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4"] Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpnb\" (UniqueName: \"kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697894 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697935 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.697994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.698020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800338 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kpnb\" (UniqueName: \"kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.800581 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.806415 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.806416 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.806413 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.806639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.806847 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.810038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.819021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kpnb\" (UniqueName: 
\"kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:22 crc kubenswrapper[4812]: I0218 17:18:22.883170 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:18:23 crc kubenswrapper[4812]: I0218 17:18:23.482779 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4"] Feb 18 17:18:23 crc kubenswrapper[4812]: I0218 17:18:23.993910 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:23 crc kubenswrapper[4812]: I0218 17:18:23.994481 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:24 crc kubenswrapper[4812]: I0218 17:18:24.087379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:24 crc kubenswrapper[4812]: I0218 17:18:24.464436 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" event={"ID":"7430437b-aab4-42f1-be95-3b98539e570f","Type":"ContainerStarted","Data":"0f19bf687d2f73afce5cbf6c3d0aa38a7d5d378a9ab187318b62ef787f0b386c"} Feb 18 17:18:25 crc kubenswrapper[4812]: I0218 17:18:25.480895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" event={"ID":"7430437b-aab4-42f1-be95-3b98539e570f","Type":"ContainerStarted","Data":"32b15f81c08a1b7cd6647a139dc07c133dff78b45b0023f1df009cfe2e3aa6bc"} Feb 18 17:18:25 crc kubenswrapper[4812]: I0218 17:18:25.497076 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" podStartSLOduration=1.948999337 podStartE2EDuration="3.497053235s" podCreationTimestamp="2026-02-18 17:18:22 +0000 UTC" firstStartedPulling="2026-02-18 17:18:23.537967281 +0000 UTC m=+2923.803578190" lastFinishedPulling="2026-02-18 17:18:25.086021179 +0000 UTC m=+2925.351632088" observedRunningTime="2026-02-18 17:18:25.494788859 +0000 UTC m=+2925.760399778" watchObservedRunningTime="2026-02-18 17:18:25.497053235 +0000 UTC m=+2925.762664144" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.413363 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.413986 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.414049 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 
17:18:33.414976 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.415043 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" gracePeriod=600 Feb 18 17:18:33 crc kubenswrapper[4812]: E0218 17:18:33.546946 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.566375 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" exitCode=0 Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.566423 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086"} Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.566456 4812 scope.go:117] "RemoveContainer" containerID="e2cbea9a7c3496859502aa4ef694e242df9ca9fad1f02d7270f1f99490cc2776" Feb 18 17:18:33 crc kubenswrapper[4812]: I0218 17:18:33.567132 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:18:33 crc kubenswrapper[4812]: E0218 17:18:33.567429 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:18:34 crc kubenswrapper[4812]: I0218 17:18:34.076541 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:34 crc kubenswrapper[4812]: I0218 17:18:34.142139 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:34 crc kubenswrapper[4812]: I0218 17:18:34.578203 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xsct" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="registry-server" containerID="cri-o://bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224" gracePeriod=2 Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.137801 4812 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.268844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tlw\" (UniqueName: \"kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw\") pod \"26953756-5b4b-4058-b123-ee8207d04cbc\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.268954 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content\") pod \"26953756-5b4b-4058-b123-ee8207d04cbc\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.269055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities\") pod \"26953756-5b4b-4058-b123-ee8207d04cbc\" (UID: \"26953756-5b4b-4058-b123-ee8207d04cbc\") " Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.269770 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities" (OuterVolumeSpecName: "utilities") pod "26953756-5b4b-4058-b123-ee8207d04cbc" (UID: "26953756-5b4b-4058-b123-ee8207d04cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.274424 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw" (OuterVolumeSpecName: "kube-api-access-r4tlw") pod "26953756-5b4b-4058-b123-ee8207d04cbc" (UID: "26953756-5b4b-4058-b123-ee8207d04cbc"). InnerVolumeSpecName "kube-api-access-r4tlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.294841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26953756-5b4b-4058-b123-ee8207d04cbc" (UID: "26953756-5b4b-4058-b123-ee8207d04cbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.371345 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tlw\" (UniqueName: \"kubernetes.io/projected/26953756-5b4b-4058-b123-ee8207d04cbc-kube-api-access-r4tlw\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.371380 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.371395 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26953756-5b4b-4058-b123-ee8207d04cbc-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.590393 4812 generic.go:334] "Generic (PLEG): container finished" podID="26953756-5b4b-4058-b123-ee8207d04cbc" containerID="bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224" exitCode=0 Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.590448 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerDied","Data":"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224"} Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.590461 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xsct" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.590484 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xsct" event={"ID":"26953756-5b4b-4058-b123-ee8207d04cbc","Type":"ContainerDied","Data":"e0380a789f38a1c1f03791fc72405269cfa2282775174863d9bb66d89402fa6c"} Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.590502 4812 scope.go:117] "RemoveContainer" containerID="bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.622884 4812 scope.go:117] "RemoveContainer" containerID="a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.627985 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.637712 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xsct"] Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.656428 4812 scope.go:117] "RemoveContainer" containerID="60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.701052 4812 scope.go:117] "RemoveContainer" containerID="bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224" Feb 18 17:18:35 crc kubenswrapper[4812]: E0218 17:18:35.701741 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224\": container with ID starting with bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224 not found: ID does not exist" containerID="bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.701781 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224"} err="failed to get container status \"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224\": rpc error: code = NotFound desc = could not find container \"bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224\": container with ID starting with bfafa65bad113655da737881ad12d0bcbe4f768e545bf58a8ba290ed6d106224 not found: ID does not exist" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.701809 4812 scope.go:117] "RemoveContainer" containerID="a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4" Feb 18 17:18:35 crc kubenswrapper[4812]: E0218 17:18:35.702077 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4\": container with ID starting with a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4 not found: ID does not exist" containerID="a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.702147 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4"} err="failed to get container status \"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4\": rpc error: code = NotFound desc = could not find container \"a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4\": container with ID starting with a4c45c7996101c330146637d3921941243f56526c2ef6718a21345154454b0c4 not found: ID does not exist" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.702179 4812 scope.go:117] "RemoveContainer" containerID="60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9" Feb 18 17:18:35 crc kubenswrapper[4812]: E0218 17:18:35.702460 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9\": container with ID starting with 60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9 not found: ID does not exist" containerID="60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9" Feb 18 17:18:35 crc kubenswrapper[4812]: I0218 17:18:35.702509 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9"} err="failed to get container status \"60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9\": rpc error: code = NotFound desc = could not find container \"60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9\": container with ID starting with 60ca9011e67bad3721c8dec927d842f384b9ac42368a83e52539e98162f38db9 not found: ID does not exist" Feb 18 17:18:36 crc kubenswrapper[4812]: I0218 17:18:36.521745 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" path="/var/lib/kubelet/pods/26953756-5b4b-4058-b123-ee8207d04cbc/volumes" Feb 18 17:18:48 crc kubenswrapper[4812]: I0218 17:18:48.508642 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:18:48 crc kubenswrapper[4812]: E0218 17:18:48.509408 4812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:18:59 crc kubenswrapper[4812]: I0218 17:18:59.509013 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:18:59 crc kubenswrapper[4812]: E0218 17:18:59.509802 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:19:12 crc kubenswrapper[4812]: I0218 17:19:12.508412 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:19:12 crc kubenswrapper[4812]: E0218 17:19:12.509202 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:19:25 crc kubenswrapper[4812]: I0218 17:19:25.508868 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:19:25 crc kubenswrapper[4812]: E0218 17:19:25.510004 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:19:40 crc kubenswrapper[4812]: I0218 17:19:40.516113 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:19:40 crc kubenswrapper[4812]: E0218 17:19:40.518761 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:19:55 crc kubenswrapper[4812]: I0218 17:19:55.508662 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:19:55 crc kubenswrapper[4812]: E0218 17:19:55.509353 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:20:08 crc kubenswrapper[4812]: I0218 17:20:08.509132 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:20:08 crc kubenswrapper[4812]: E0218 17:20:08.509964 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:20:19 crc kubenswrapper[4812]: I0218 17:20:19.615313 4812 generic.go:334] "Generic (PLEG): container finished" podID="7430437b-aab4-42f1-be95-3b98539e570f" containerID="32b15f81c08a1b7cd6647a139dc07c133dff78b45b0023f1df009cfe2e3aa6bc" exitCode=0 Feb 18 17:20:19 crc kubenswrapper[4812]: I0218 17:20:19.615429 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" event={"ID":"7430437b-aab4-42f1-be95-3b98539e570f","Type":"ContainerDied","Data":"32b15f81c08a1b7cd6647a139dc07c133dff78b45b0023f1df009cfe2e3aa6bc"} Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.064120 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169457 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169580 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169713 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169748 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kpnb\" (UniqueName: \"kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: 
\"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169789 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.169868 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1\") pod \"7430437b-aab4-42f1-be95-3b98539e570f\" (UID: \"7430437b-aab4-42f1-be95-3b98539e570f\") " Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.177511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb" (OuterVolumeSpecName: "kube-api-access-8kpnb") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "kube-api-access-8kpnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.191375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.199325 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.202277 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory" (OuterVolumeSpecName: "inventory") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.211332 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.212579 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.213254 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7430437b-aab4-42f1-be95-3b98539e570f" (UID: "7430437b-aab4-42f1-be95-3b98539e570f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272648 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272696 4812 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272710 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272722 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kpnb\" (UniqueName: \"kubernetes.io/projected/7430437b-aab4-42f1-be95-3b98539e570f-kube-api-access-8kpnb\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272733 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272749 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.272764 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7430437b-aab4-42f1-be95-3b98539e570f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.636944 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" event={"ID":"7430437b-aab4-42f1-be95-3b98539e570f","Type":"ContainerDied","Data":"0f19bf687d2f73afce5cbf6c3d0aa38a7d5d378a9ab187318b62ef787f0b386c"} Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.637006 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f19bf687d2f73afce5cbf6c3d0aa38a7d5d378a9ab187318b62ef787f0b386c" Feb 18 17:20:21 crc kubenswrapper[4812]: I0218 17:20:21.637033 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4" Feb 18 17:20:22 crc kubenswrapper[4812]: I0218 17:20:22.508299 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:20:22 crc kubenswrapper[4812]: E0218 17:20:22.508697 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:20:34 crc kubenswrapper[4812]: I0218 17:20:34.509523 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:20:34 crc kubenswrapper[4812]: E0218 17:20:34.510944 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:20:47 crc kubenswrapper[4812]: I0218 17:20:47.508727 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:20:47 crc kubenswrapper[4812]: E0218 17:20:47.509856 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:21:00 crc kubenswrapper[4812]: I0218 17:21:00.516168 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:21:00 crc kubenswrapper[4812]: E0218 17:21:00.517879 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:21:07 crc kubenswrapper[4812]: I0218 17:21:07.604198 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:07 crc kubenswrapper[4812]: I0218 17:21:07.605088 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="prometheus" containerID="cri-o://06d95caf9f6d500c8886b19094b812db611a76a5ba7bb16c54c8e30e2e6d4a56" gracePeriod=600 Feb 18 17:21:07 crc kubenswrapper[4812]: I0218 17:21:07.605149 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" 
containerName="config-reloader" containerID="cri-o://23aa12d5860605c219f23fb083d2f672cb24343c4d3d1e21628269e100289196" gracePeriod=600 Feb 18 17:21:07 crc kubenswrapper[4812]: I0218 17:21:07.605195 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="thanos-sidecar" containerID="cri-o://75c6fc662a478ed149214543f8e4354228d2b3af3a0e3049014b4b44f14ed00b" gracePeriod=600 Feb 18 17:21:07 crc kubenswrapper[4812]: I0218 17:21:07.690576 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.144:9090/-/ready\": read tcp 10.217.0.2:56214->10.217.0.144:9090: read: connection reset by peer" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140526 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerID="75c6fc662a478ed149214543f8e4354228d2b3af3a0e3049014b4b44f14ed00b" exitCode=0 Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140558 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerID="23aa12d5860605c219f23fb083d2f672cb24343c4d3d1e21628269e100289196" exitCode=0 Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140567 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerID="06d95caf9f6d500c8886b19094b812db611a76a5ba7bb16c54c8e30e2e6d4a56" exitCode=0 Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerDied","Data":"75c6fc662a478ed149214543f8e4354228d2b3af3a0e3049014b4b44f14ed00b"} Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140621 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerDied","Data":"23aa12d5860605c219f23fb083d2f672cb24343c4d3d1e21628269e100289196"} Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.140631 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerDied","Data":"06d95caf9f6d500c8886b19094b812db611a76a5ba7bb16c54c8e30e2e6d4a56"} Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.663258 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777635 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777693 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777738 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777788 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwnjm\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.777924 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.778295 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.778861 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.778919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.778949 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779018 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779274 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file\") pod \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\" (UID: \"f8d94f2a-b628-40a4-ad97-96c41ea2940a\") " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779478 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.779821 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.780250 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.780271 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.780284 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f8d94f2a-b628-40a4-ad97-96c41ea2940a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.784894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out" (OuterVolumeSpecName: "config-out") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.787185 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm" (OuterVolumeSpecName: "kube-api-access-gwnjm") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "kube-api-access-gwnjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.791285 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.799310 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.799410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.799652 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config" (OuterVolumeSpecName: "config") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.800485 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.801015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.826371 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.871369 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config" (OuterVolumeSpecName: "web-config") pod "f8d94f2a-b628-40a4-ad97-96c41ea2940a" (UID: "f8d94f2a-b628-40a4-ad97-96c41ea2940a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881786 4812 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881826 4812 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881842 4812 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881856 4812 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881871 4812 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881885 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-config\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881898 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwnjm\" (UniqueName: \"kubernetes.io/projected/f8d94f2a-b628-40a4-ad97-96c41ea2940a-kube-api-access-gwnjm\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881910 4812 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881925 4812 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f8d94f2a-b628-40a4-ad97-96c41ea2940a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.881973 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") on node \"crc\" " Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.926523 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.927470 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651") on node "crc" Feb 18 17:21:08 crc kubenswrapper[4812]: I0218 17:21:08.984464 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") on node \"crc\" DevicePath \"\"" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.154285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f8d94f2a-b628-40a4-ad97-96c41ea2940a","Type":"ContainerDied","Data":"e11c329bae96c7fcd447ac5a10e90ddb64399756f42353b61a4af80f173c6455"} Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.154351 4812 scope.go:117] "RemoveContainer" containerID="75c6fc662a478ed149214543f8e4354228d2b3af3a0e3049014b4b44f14ed00b" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.154548 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.190760 4812 scope.go:117] "RemoveContainer" containerID="23aa12d5860605c219f23fb083d2f672cb24343c4d3d1e21628269e100289196" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.196133 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.206085 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.213566 4812 scope.go:117] "RemoveContainer" containerID="06d95caf9f6d500c8886b19094b812db611a76a5ba7bb16c54c8e30e2e6d4a56" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.230660 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231066 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="config-reloader" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231084 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="config-reloader" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231119 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="prometheus" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231129 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="prometheus" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231155 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="registry-server" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231165 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="registry-server" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231190 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="init-config-reloader" Feb 18 17:21:09 crc 
kubenswrapper[4812]: I0218 17:21:09.231198 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="init-config-reloader" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231217 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7430437b-aab4-42f1-be95-3b98539e570f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231228 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7430437b-aab4-42f1-be95-3b98539e570f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231318 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="thanos-sidecar" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231329 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="thanos-sidecar" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231340 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="extract-utilities" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231349 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="extract-utilities" Feb 18 17:21:09 crc kubenswrapper[4812]: E0218 17:21:09.231363 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="extract-content" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231384 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="extract-content" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231626 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7430437b-aab4-42f1-be95-3b98539e570f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231652 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="prometheus" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231663 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="thanos-sidecar" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231674 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="26953756-5b4b-4058-b123-ee8207d04cbc" containerName="registry-server" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.231687 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" containerName="config-reloader" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.234309 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.236582 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.236697 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.242412 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.242693 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.242910 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.243136 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.243367 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2nrwm" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.243654 4812 scope.go:117] "RemoveContainer" containerID="3bf962ec4f7eadb961e74c8ccbeb100afa6ad08bfe029a1f79c48d4164e95240" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.249034 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.264592 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc 
kubenswrapper[4812]: I0218 17:21:09.290684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsqwk\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-kube-api-access-wsqwk\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290812 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290955 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.290983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.291010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc 
kubenswrapper[4812]: I0218 17:21:09.291039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.391911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392343 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392360 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392414 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392441 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392491 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392547 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqwk\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-kube-api-access-wsqwk\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.392978 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.393181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.397917 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.397938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.398051 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.398265 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.398671 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.398692 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af460646b9286704a29606a0b72ed4f0b878dd755da4447874f6899e9b871ead/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.403462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.405342 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.406573 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.408342 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.411459 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqwk\" (UniqueName: \"kubernetes.io/projected/f90b33eb-1f5b-4d69-8b0f-0798ac88e041-kube-api-access-wsqwk\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.437538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-decd2885-08ff-4dc4-b1a3-e043b5a74651\") pod \"prometheus-metric-storage-0\" (UID: \"f90b33eb-1f5b-4d69-8b0f-0798ac88e041\") " pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:09 crc kubenswrapper[4812]: I0218 17:21:09.556933 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:10 crc kubenswrapper[4812]: I0218 17:21:10.071295 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 17:21:10 crc kubenswrapper[4812]: W0218 17:21:10.077543 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf90b33eb_1f5b_4d69_8b0f_0798ac88e041.slice/crio-e1ff769458f2ea62a2bde1a1b7b87ba3285a53c5465fe54e21be7dc100f073b8 WatchSource:0}: Error finding container e1ff769458f2ea62a2bde1a1b7b87ba3285a53c5465fe54e21be7dc100f073b8: Status 404 returned error can't find the container with id e1ff769458f2ea62a2bde1a1b7b87ba3285a53c5465fe54e21be7dc100f073b8 Feb 18 17:21:10 crc kubenswrapper[4812]: I0218 17:21:10.169270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerStarted","Data":"e1ff769458f2ea62a2bde1a1b7b87ba3285a53c5465fe54e21be7dc100f073b8"} Feb 18 17:21:10 crc kubenswrapper[4812]: I0218 17:21:10.520499 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d94f2a-b628-40a4-ad97-96c41ea2940a" path="/var/lib/kubelet/pods/f8d94f2a-b628-40a4-ad97-96c41ea2940a/volumes" Feb 18 17:21:12 crc kubenswrapper[4812]: I0218 17:21:12.509683 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:21:12 crc kubenswrapper[4812]: E0218 17:21:12.510603 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:21:14 crc kubenswrapper[4812]: I0218 17:21:14.216237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerStarted","Data":"873d95540764942ecb7940810b5fc1ca2a0be6bfe62f9e830272af416f95f921"} Feb 18 17:21:21 crc kubenswrapper[4812]: I0218 17:21:21.285980 4812 generic.go:334] "Generic (PLEG): container finished" podID="f90b33eb-1f5b-4d69-8b0f-0798ac88e041" containerID="873d95540764942ecb7940810b5fc1ca2a0be6bfe62f9e830272af416f95f921" exitCode=0 Feb 18 17:21:21 crc kubenswrapper[4812]: I0218 17:21:21.286114 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerDied","Data":"873d95540764942ecb7940810b5fc1ca2a0be6bfe62f9e830272af416f95f921"} Feb 18 17:21:22 crc kubenswrapper[4812]: I0218 17:21:22.299900 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerStarted","Data":"9ce5a04920f1e4af1f0422c6a9dc9df86fe8b9c740ff283122ec0867d73d180c"} Feb 18 17:21:25 crc kubenswrapper[4812]: I0218 17:21:25.371080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerStarted","Data":"8e4ee36513d85a15f6593b62887c23c3f5a321c24216c697a9d97a1083a8e120"} Feb 18 17:21:25 crc 
kubenswrapper[4812]: I0218 17:21:25.371673 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f90b33eb-1f5b-4d69-8b0f-0798ac88e041","Type":"ContainerStarted","Data":"3edaf768c7812120f9b8ec082d78616b7e0b86f4e671d850bffeea1de635bc6a"} Feb 18 17:21:25 crc kubenswrapper[4812]: I0218 17:21:25.402551 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.402524413 podStartE2EDuration="16.402524413s" podCreationTimestamp="2026-02-18 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 17:21:25.396278307 +0000 UTC m=+3105.661889216" watchObservedRunningTime="2026-02-18 17:21:25.402524413 +0000 UTC m=+3105.668135342" Feb 18 17:21:25 crc kubenswrapper[4812]: I0218 17:21:25.508286 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:21:25 crc kubenswrapper[4812]: E0218 17:21:25.508958 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:21:29 crc kubenswrapper[4812]: I0218 17:21:29.557752 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:38 crc kubenswrapper[4812]: I0218 17:21:38.508823 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:21:38 crc kubenswrapper[4812]: E0218 17:21:38.509793 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:21:39 crc kubenswrapper[4812]: I0218 17:21:39.557291 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:39 crc kubenswrapper[4812]: I0218 17:21:39.563194 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:40 crc kubenswrapper[4812]: I0218 17:21:40.552317 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 17:21:53 crc kubenswrapper[4812]: I0218 17:21:53.508043 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:21:53 crc kubenswrapper[4812]: E0218 17:21:53.509005 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.816083 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.819032 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.821121 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.821178 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.823221 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sdp7q" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.823324 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.827118 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.955876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.955927 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956374 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956414 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956431 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pslpt\" (UniqueName: \"kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:03 crc kubenswrapper[4812]: I0218 17:22:03.956567 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.057872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.057967 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058039 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058069 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pslpt\" (UniqueName: \"kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058192 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058407 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058738 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.058938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.059720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.060754 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.068245 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: 
I0218 17:22:04.069623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.074659 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.080718 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pslpt\" (UniqueName: \"kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.088999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.147008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.652713 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.657319 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:22:04 crc kubenswrapper[4812]: I0218 17:22:04.759165 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d55cc8b7-fd00-4b48-ae2c-458f83580502","Type":"ContainerStarted","Data":"17a5ddfd21c812a3ea38314caf7d8f4f154420f645987183c6338dba77834796"} Feb 18 17:22:07 crc kubenswrapper[4812]: I0218 17:22:07.509155 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:22:07 crc kubenswrapper[4812]: E0218 17:22:07.509811 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:22:18 crc kubenswrapper[4812]: I0218 17:22:18.902264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d55cc8b7-fd00-4b48-ae2c-458f83580502","Type":"ContainerStarted","Data":"396ab037eee11369c699fc2b8728410e2f423bd1cbe72a74f99caaeed28aeee7"} Feb 18 17:22:18 crc kubenswrapper[4812]: I0218 17:22:18.937917 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.842049614 podStartE2EDuration="16.937898023s" podCreationTimestamp="2026-02-18 17:22:02 +0000 UTC" firstStartedPulling="2026-02-18 17:22:04.657017916 +0000 
UTC m=+3144.922628825" lastFinishedPulling="2026-02-18 17:22:17.752866325 +0000 UTC m=+3158.018477234" observedRunningTime="2026-02-18 17:22:18.93495936 +0000 UTC m=+3159.200570269" watchObservedRunningTime="2026-02-18 17:22:18.937898023 +0000 UTC m=+3159.203508932" Feb 18 17:22:22 crc kubenswrapper[4812]: I0218 17:22:22.507884 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:22:22 crc kubenswrapper[4812]: E0218 17:22:22.508763 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:22:36 crc kubenswrapper[4812]: I0218 17:22:36.508262 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:22:36 crc kubenswrapper[4812]: E0218 17:22:36.509238 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:22:49 crc kubenswrapper[4812]: I0218 17:22:49.508620 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:22:49 crc kubenswrapper[4812]: E0218 17:22:49.509542 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:23:04 crc kubenswrapper[4812]: I0218 17:23:04.508552 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:23:04 crc kubenswrapper[4812]: E0218 17:23:04.509513 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:23:16 crc kubenswrapper[4812]: I0218 17:23:16.509278 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:23:16 crc kubenswrapper[4812]: E0218 17:23:16.509999 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:23:29 crc kubenswrapper[4812]: I0218 17:23:29.508566 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:23:29 crc kubenswrapper[4812]: E0218 17:23:29.509550 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:23:44 crc kubenswrapper[4812]: I0218 17:23:44.188086 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-db576bcfc-pcjbk" podUID="b814aa4e-5f04-4919-bfb3-153dd88e6ef8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 17:23:44 crc kubenswrapper[4812]: I0218 17:23:44.508636 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:23:45 crc kubenswrapper[4812]: I0218 17:23:45.300738 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702"} Feb 18 17:26:03 crc kubenswrapper[4812]: I0218 17:26:03.413831 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:26:03 crc kubenswrapper[4812]: I0218 17:26:03.414385 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.466981 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.469497 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.482837 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.532745 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.533377 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8p7q\" (UniqueName: \"kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.533584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.635717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.636224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.636376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8p7q\" (UniqueName: \"kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.636399 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.636572 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.667703 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j8p7q\" (UniqueName: \"kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q\") pod \"community-operators-h7t8p\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:21 crc kubenswrapper[4812]: I0218 17:26:21.791062 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:22 crc kubenswrapper[4812]: I0218 17:26:22.370741 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:23 crc kubenswrapper[4812]: I0218 17:26:23.363017 4812 generic.go:334] "Generic (PLEG): container finished" podID="bebe0697-1728-4a0e-8931-277dacc24235" containerID="087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5" exitCode=0 Feb 18 17:26:23 crc kubenswrapper[4812]: I0218 17:26:23.363248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerDied","Data":"087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5"} Feb 18 17:26:23 crc kubenswrapper[4812]: I0218 17:26:23.363554 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerStarted","Data":"b4f1581b7ec8a3e1aa5cb1cd49dba937009c28b9baf21d03f08a506cfb18ada9"} Feb 18 17:26:24 crc kubenswrapper[4812]: I0218 17:26:24.379067 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerStarted","Data":"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb"} Feb 18 17:26:25 crc kubenswrapper[4812]: I0218 17:26:25.392803 4812 generic.go:334] "Generic (PLEG): container finished" podID="bebe0697-1728-4a0e-8931-277dacc24235" containerID="0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb" exitCode=0 Feb 18 17:26:25 crc kubenswrapper[4812]: I0218 17:26:25.392846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerDied","Data":"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb"} Feb 18 17:26:26 crc kubenswrapper[4812]: I0218 17:26:26.410006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerStarted","Data":"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb"} Feb 18 17:26:26 crc kubenswrapper[4812]: I0218 17:26:26.434021 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7t8p" podStartSLOduration=3.011740817 podStartE2EDuration="5.433996678s" podCreationTimestamp="2026-02-18 17:26:21 +0000 UTC" firstStartedPulling="2026-02-18 17:26:23.365092432 +0000 UTC m=+3403.630703341" lastFinishedPulling="2026-02-18 17:26:25.787348293 +0000 UTC m=+3406.052959202" observedRunningTime="2026-02-18 17:26:26.431203728 +0000 UTC m=+3406.696814637" watchObservedRunningTime="2026-02-18 17:26:26.433996678 +0000 UTC m=+3406.699607587" Feb 18 17:26:28 crc kubenswrapper[4812]: I0218 17:26:28.851495 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:28 crc kubenswrapper[4812]: I0218 17:26:28.854522 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:28 crc kubenswrapper[4812]: I0218 17:26:28.863009 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.001048 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.001425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.001560 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsr2q\" (UniqueName: \"kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.103209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.103277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsr2q\" (UniqueName: \"kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.103371 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.104125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.104145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content\") pod \"certified-operators-lr7sb\" (UID: 
\"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.128444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsr2q\" (UniqueName: \"kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q\") pod \"certified-operators-lr7sb\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.182452 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:29 crc kubenswrapper[4812]: I0218 17:26:29.694035 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:30 crc kubenswrapper[4812]: I0218 17:26:30.445403 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerID="42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29" exitCode=0 Feb 18 17:26:30 crc kubenswrapper[4812]: I0218 17:26:30.445483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerDied","Data":"42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29"} Feb 18 17:26:30 crc kubenswrapper[4812]: I0218 17:26:30.445519 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerStarted","Data":"85b30892063641bfb3055f226fd90fca12e0c89d1bb68820a8039a7f97472d8a"} Feb 18 17:26:31 crc kubenswrapper[4812]: I0218 17:26:31.792088 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:31 crc kubenswrapper[4812]: I0218 17:26:31.792680 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:31 crc kubenswrapper[4812]: I0218 17:26:31.855383 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:32 crc kubenswrapper[4812]: I0218 17:26:32.461842 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerStarted","Data":"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700"} Feb 18 17:26:32 crc kubenswrapper[4812]: I0218 17:26:32.528435 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:33 crc kubenswrapper[4812]: I0218 17:26:33.414006 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:26:33 crc kubenswrapper[4812]: I0218 17:26:33.414076 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 17:26:33 crc kubenswrapper[4812]: I0218 17:26:33.442338 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:34 crc kubenswrapper[4812]: I0218 17:26:34.486614 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerID="8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700" exitCode=0 Feb 18 17:26:34 crc kubenswrapper[4812]: I0218 17:26:34.487424 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7t8p" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="registry-server" containerID="cri-o://7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb" gracePeriod=2 Feb 18 17:26:34 crc kubenswrapper[4812]: I0218 17:26:34.486753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerDied","Data":"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700"} Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.121813 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.271468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8p7q\" (UniqueName: \"kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q\") pod \"bebe0697-1728-4a0e-8931-277dacc24235\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.271568 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content\") pod \"bebe0697-1728-4a0e-8931-277dacc24235\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.271765 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities\") pod \"bebe0697-1728-4a0e-8931-277dacc24235\" (UID: \"bebe0697-1728-4a0e-8931-277dacc24235\") " Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.272422 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities" (OuterVolumeSpecName: "utilities") pod "bebe0697-1728-4a0e-8931-277dacc24235" (UID: "bebe0697-1728-4a0e-8931-277dacc24235"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.283909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q" (OuterVolumeSpecName: "kube-api-access-j8p7q") pod "bebe0697-1728-4a0e-8931-277dacc24235" (UID: "bebe0697-1728-4a0e-8931-277dacc24235"). InnerVolumeSpecName "kube-api-access-j8p7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.322268 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebe0697-1728-4a0e-8931-277dacc24235" (UID: "bebe0697-1728-4a0e-8931-277dacc24235"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.374571 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8p7q\" (UniqueName: \"kubernetes.io/projected/bebe0697-1728-4a0e-8931-277dacc24235-kube-api-access-j8p7q\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.374610 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.374624 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe0697-1728-4a0e-8931-277dacc24235-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.498559 4812 generic.go:334] "Generic (PLEG): container finished" podID="bebe0697-1728-4a0e-8931-277dacc24235" containerID="7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb" exitCode=0 Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.498865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerDied","Data":"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb"} Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.498896 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7t8p" event={"ID":"bebe0697-1728-4a0e-8931-277dacc24235","Type":"ContainerDied","Data":"b4f1581b7ec8a3e1aa5cb1cd49dba937009c28b9baf21d03f08a506cfb18ada9"} Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.498912 4812 scope.go:117] "RemoveContainer" containerID="7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.499028 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7t8p" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.509129 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerStarted","Data":"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a"} Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.523792 4812 scope.go:117] "RemoveContainer" containerID="0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.533474 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr7sb" podStartSLOduration=3.032139652 podStartE2EDuration="7.533455953s" podCreationTimestamp="2026-02-18 17:26:28 +0000 UTC" firstStartedPulling="2026-02-18 17:26:30.447477854 +0000 UTC m=+3410.713088763" lastFinishedPulling="2026-02-18 17:26:34.948794155 +0000 UTC m=+3415.214405064" observedRunningTime="2026-02-18 17:26:35.531932525 +0000 UTC m=+3415.797543434" watchObservedRunningTime="2026-02-18 17:26:35.533455953 +0000 UTC m=+3415.799066852" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.552347 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.555256 4812 scope.go:117] "RemoveContainer" containerID="087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.561940 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7t8p"] Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.572532 4812 scope.go:117] "RemoveContainer" containerID="7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb" Feb 18 17:26:35 crc kubenswrapper[4812]: E0218 17:26:35.573050 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb\": container with ID starting with 7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb not found: ID does not exist" containerID="7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.573135 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb"} err="failed to get container status \"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb\": rpc error: code = NotFound desc = could not find container \"7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb\": container with ID starting with 7365de930cc1cffe2d53cf7183d0ed0f4bb1223540a66704addba55a2fdcdfbb not found: ID does not exist" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.573163 4812 scope.go:117] "RemoveContainer" containerID="0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb" Feb 18 17:26:35 crc kubenswrapper[4812]: E0218 17:26:35.573533 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb\": container with ID starting with 0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb not found: ID does not exist" 
containerID="0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.573555 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb"} err="failed to get container status \"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb\": rpc error: code = NotFound desc = could not find container \"0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb\": container with ID starting with 0e3a0fc8b6028d8fa6cf5e0b557adfb21f36969bb69da516993375548089adfb not found: ID does not exist" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.573572 4812 scope.go:117] "RemoveContainer" containerID="087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5" Feb 18 17:26:35 crc kubenswrapper[4812]: E0218 17:26:35.573838 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5\": container with ID starting with 087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5 not found: ID does not exist" containerID="087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5" Feb 18 17:26:35 crc kubenswrapper[4812]: I0218 17:26:35.573858 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5"} err="failed to get container status \"087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5\": rpc error: code = NotFound desc = could not find container \"087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5\": container with ID starting with 087fa31e443852725f517c90606d28a7bf6a8914fefbbc6465c15047d91117e5 not found: ID does not exist" Feb 18 17:26:36 crc kubenswrapper[4812]: I0218 17:26:36.522486 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebe0697-1728-4a0e-8931-277dacc24235" path="/var/lib/kubelet/pods/bebe0697-1728-4a0e-8931-277dacc24235/volumes" Feb 18 17:26:39 crc kubenswrapper[4812]: I0218 17:26:39.182666 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:39 crc kubenswrapper[4812]: I0218 17:26:39.183206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:40 crc kubenswrapper[4812]: I0218 17:26:40.227807 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lr7sb" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:26:40 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:26:40 crc kubenswrapper[4812]: > Feb 18 17:26:49 crc kubenswrapper[4812]: I0218 17:26:49.237978 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:49 crc kubenswrapper[4812]: I0218 17:26:49.301351 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:49 crc kubenswrapper[4812]: I0218 17:26:49.474245 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:50 crc kubenswrapper[4812]: 
I0218 17:26:50.646487 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lr7sb" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="registry-server" containerID="cri-o://e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a" gracePeriod=2 Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.149525 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.295839 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content\") pod \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.296166 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities\") pod \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.296261 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsr2q\" (UniqueName: \"kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q\") pod \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\" (UID: \"d8f15d14-db12-4d91-af7b-d1b28395ec0d\") " Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.297007 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities" (OuterVolumeSpecName: "utilities") pod "d8f15d14-db12-4d91-af7b-d1b28395ec0d" (UID: "d8f15d14-db12-4d91-af7b-d1b28395ec0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.301196 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q" (OuterVolumeSpecName: "kube-api-access-rsr2q") pod "d8f15d14-db12-4d91-af7b-d1b28395ec0d" (UID: "d8f15d14-db12-4d91-af7b-d1b28395ec0d"). InnerVolumeSpecName "kube-api-access-rsr2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.352904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8f15d14-db12-4d91-af7b-d1b28395ec0d" (UID: "d8f15d14-db12-4d91-af7b-d1b28395ec0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.398441 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.398491 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsr2q\" (UniqueName: \"kubernetes.io/projected/d8f15d14-db12-4d91-af7b-d1b28395ec0d-kube-api-access-rsr2q\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.398503 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f15d14-db12-4d91-af7b-d1b28395ec0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.659992 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerID="e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a" exitCode=0 Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.660035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerDied","Data":"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a"} Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.660064 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr7sb" event={"ID":"d8f15d14-db12-4d91-af7b-d1b28395ec0d","Type":"ContainerDied","Data":"85b30892063641bfb3055f226fd90fca12e0c89d1bb68820a8039a7f97472d8a"} Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.660084 4812 scope.go:117] "RemoveContainer" containerID="e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.660122 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr7sb" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.687138 4812 scope.go:117] "RemoveContainer" containerID="8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.704691 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.717352 4812 scope.go:117] "RemoveContainer" containerID="42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.718705 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lr7sb"] Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.880639 4812 scope.go:117] "RemoveContainer" containerID="e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a" Feb 18 17:26:51 crc kubenswrapper[4812]: E0218 17:26:51.881333 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a\": container with ID starting with e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a not found: ID does not exist" containerID="e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.881370 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a"} err="failed to get container status \"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a\": rpc error: code = NotFound desc = could not find container \"e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a\": container with ID starting with e047e22800adfe0111edcf0c6190d8f5978ac27c3726aef638d0bdf919304c3a not found: ID does not exist" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.881389 4812 scope.go:117] "RemoveContainer" containerID="8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700" Feb 18 17:26:51 crc kubenswrapper[4812]: E0218 17:26:51.881723 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700\": container with ID starting with 8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700 not found: ID does not exist" containerID="8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.881758 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700"} err="failed to get container status \"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700\": rpc error: code = NotFound desc = could not find container \"8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700\": container with ID starting with 8f435d8e0c25d0e827292174efe2d8c7000b7c1525f0c6a497d0691ad4b04700 not found: ID does not exist" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.881783 4812 scope.go:117] "RemoveContainer" containerID="42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29" Feb 18 17:26:51 crc kubenswrapper[4812]: E0218 17:26:51.882607 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29\": container with ID starting with 42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29 not found: ID does not exist" containerID="42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29" Feb 18 17:26:51 crc kubenswrapper[4812]: I0218 17:26:51.882634 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29"} err="failed to get container status \"42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29\": rpc error: code = NotFound desc = could not find container \"42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29\": container with ID starting with 42df4c9b1aad4d268bc08d3691e922f0bc1619796c7b44f133d84e01a41b2a29 not found: ID does not exist" Feb 18 17:26:52 crc kubenswrapper[4812]: I0218 17:26:52.520051 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" path="/var/lib/kubelet/pods/d8f15d14-db12-4d91-af7b-d1b28395ec0d/volumes" Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.413643 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.414223 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.414286 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.415130 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.415204 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702" gracePeriod=600 Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.764458 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702" exitCode=0 Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.764528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702"} Feb 18 17:27:03 crc kubenswrapper[4812]: I0218 17:27:03.764912 4812 scope.go:117] "RemoveContainer" containerID="e63feb18f8240c8e047e75ee7a418a3cd6ef8a3ecbb6d01c272e10c3506df086" Feb 18 17:27:04 crc kubenswrapper[4812]: I0218 17:27:04.776085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489"} Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.973036 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.973950 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="extract-utilities" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.973965 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="extract-utilities" Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.973989 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="extract-content" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.973996 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="extract-content" Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.974010 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974017 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.974036 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="extract-utilities" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974043 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="extract-utilities" Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.974055 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974061 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: E0218 17:27:47.974073 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="extract-content" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974078 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="extract-content" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974294 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebe0697-1728-4a0e-8931-277dacc24235" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.974321 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d8f15d14-db12-4d91-af7b-d1b28395ec0d" containerName="registry-server" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.975777 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:47 crc kubenswrapper[4812]: I0218 17:27:47.985796 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.081478 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.081540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srdf5\" (UniqueName: \"kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.081574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.184574 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srdf5\" (UniqueName: \"kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.184673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.185356 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.185704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.185959 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " 
pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.207328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srdf5\" (UniqueName: \"kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5\") pod \"redhat-operators-gq9w7\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.312722 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:48 crc kubenswrapper[4812]: I0218 17:27:48.791625 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:27:49 crc kubenswrapper[4812]: I0218 17:27:49.230959 4812 generic.go:334] "Generic (PLEG): container finished" podID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerID="813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9" exitCode=0 Feb 18 17:27:49 crc kubenswrapper[4812]: I0218 17:27:49.231007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerDied","Data":"813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9"} Feb 18 17:27:49 crc kubenswrapper[4812]: I0218 17:27:49.231036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerStarted","Data":"fca57b315a81d840eb94f191dbd8cdd911b74bb716f24422b4c3658a4f4f1775"} Feb 18 17:27:49 crc kubenswrapper[4812]: I0218 17:27:49.233319 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:27:50 crc kubenswrapper[4812]: I0218 17:27:50.243072 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerStarted","Data":"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796"} Feb 18 17:27:56 crc kubenswrapper[4812]: I0218 17:27:56.298456 4812 generic.go:334] "Generic (PLEG): container finished" podID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerID="e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796" exitCode=0 Feb 18 17:27:56 crc kubenswrapper[4812]: I0218 17:27:56.298564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerDied","Data":"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796"} Feb 18 17:27:57 crc kubenswrapper[4812]: I0218 17:27:57.310393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerStarted","Data":"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8"} Feb 18 17:27:57 crc kubenswrapper[4812]: I0218 17:27:57.336546 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gq9w7" podStartSLOduration=2.870695517 podStartE2EDuration="10.336527662s" podCreationTimestamp="2026-02-18 17:27:47 +0000 UTC" firstStartedPulling="2026-02-18 17:27:49.233072659 +0000 UTC m=+3489.498683568" lastFinishedPulling="2026-02-18 17:27:56.698904804 +0000 UTC m=+3496.964515713" 
observedRunningTime="2026-02-18 17:27:57.330128312 +0000 UTC m=+3497.595739231" watchObservedRunningTime="2026-02-18 17:27:57.336527662 +0000 UTC m=+3497.602138571" Feb 18 17:27:58 crc kubenswrapper[4812]: I0218 17:27:58.313751 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:58 crc kubenswrapper[4812]: I0218 17:27:58.314171 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:27:59 crc kubenswrapper[4812]: I0218 17:27:59.361499 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:27:59 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:27:59 crc kubenswrapper[4812]: > Feb 18 17:28:09 crc kubenswrapper[4812]: I0218 17:28:09.363527 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:28:09 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:28:09 crc kubenswrapper[4812]: > Feb 18 17:28:19 crc kubenswrapper[4812]: I0218 17:28:19.364710 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:28:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:28:19 crc kubenswrapper[4812]: > Feb 18 17:28:29 crc kubenswrapper[4812]: I0218 17:28:29.364999 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:28:29 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:28:29 crc kubenswrapper[4812]: > Feb 18 17:28:39 crc kubenswrapper[4812]: I0218 17:28:39.399710 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" probeResult="failure" output=< Feb 18 17:28:39 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:28:39 crc kubenswrapper[4812]: > Feb 18 17:28:48 crc kubenswrapper[4812]: I0218 17:28:48.365049 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:28:48 crc kubenswrapper[4812]: I0218 17:28:48.413833 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:28:49 crc kubenswrapper[4812]: I0218 17:28:49.196383 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:28:49 crc kubenswrapper[4812]: I0218 17:28:49.791738 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gq9w7" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" containerID="cri-o://b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8" gracePeriod=2 Feb 18 17:28:50 crc 
kubenswrapper[4812]: I0218 17:28:50.273596 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.379404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities\") pod \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.379539 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content\") pod \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.379568 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srdf5\" (UniqueName: \"kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5\") pod \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\" (UID: \"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d\") " Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.380306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities" (OuterVolumeSpecName: "utilities") pod "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" (UID: "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.380527 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.388769 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5" (OuterVolumeSpecName: "kube-api-access-srdf5") pod "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" (UID: "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d"). InnerVolumeSpecName "kube-api-access-srdf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.482448 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srdf5\" (UniqueName: \"kubernetes.io/projected/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-kube-api-access-srdf5\") on node \"crc\" DevicePath \"\"" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.491273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" (UID: "a20a1bcc-d3a3-4f83-af51-d7da1f98e62d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.584497 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.804956 4812 generic.go:334] "Generic (PLEG): container finished" podID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerID="b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8" exitCode=0 Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.805007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerDied","Data":"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8"} Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.805014 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gq9w7" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.805041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq9w7" event={"ID":"a20a1bcc-d3a3-4f83-af51-d7da1f98e62d","Type":"ContainerDied","Data":"fca57b315a81d840eb94f191dbd8cdd911b74bb716f24422b4c3658a4f4f1775"} Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.805063 4812 scope.go:117] "RemoveContainer" containerID="b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.829222 4812 scope.go:117] "RemoveContainer" containerID="e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.831468 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.842644 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gq9w7"] Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.854839 4812 scope.go:117] "RemoveContainer" containerID="813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.902709 4812 scope.go:117] "RemoveContainer" containerID="b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8" Feb 18 17:28:50 crc kubenswrapper[4812]: E0218 17:28:50.903248 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8\": container with ID starting with b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8 not found: ID does not exist" containerID="b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.903282 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8"} err="failed to get container status \"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8\": rpc error: code = NotFound desc = could not find container \"b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8\": container with ID starting with b09031a2d2cdc581741639a53b1265ed324e4e68b6424380b293c4494aa138d8 not found: ID does not exist" Feb 18 17:28:50 crc 
kubenswrapper[4812]: I0218 17:28:50.903305 4812 scope.go:117] "RemoveContainer" containerID="e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796" Feb 18 17:28:50 crc kubenswrapper[4812]: E0218 17:28:50.903553 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796\": container with ID starting with e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796 not found: ID does not exist" containerID="e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.903585 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796"} err="failed to get container status \"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796\": rpc error: code = NotFound desc = could not find container \"e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796\": container with ID starting with e87b1f51c7523b44c204fcba888a5e479545c780347a547f4004e9c6999f5796 not found: ID does not exist" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.903606 4812 scope.go:117] "RemoveContainer" containerID="813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9" Feb 18 17:28:50 crc kubenswrapper[4812]: E0218 17:28:50.903812 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9\": container with ID starting with 813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9 not found: ID does not exist" containerID="813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9" Feb 18 17:28:50 crc kubenswrapper[4812]: I0218 17:28:50.903837 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9"} err="failed to get container status \"813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9\": rpc error: code = NotFound desc = could not find container \"813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9\": container with ID starting with 813aded3d23e26e1c228c3e19ab1f0e11be31abc4d64b9da8b4a20627ed5e9a9 not found: ID does not exist" Feb 18 17:28:52 crc kubenswrapper[4812]: I0218 17:28:52.522660 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" path="/var/lib/kubelet/pods/a20a1bcc-d3a3-4f83-af51-d7da1f98e62d/volumes" Feb 18 17:29:03 crc kubenswrapper[4812]: I0218 17:29:03.414297 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:29:03 crc kubenswrapper[4812]: I0218 17:29:03.414805 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.485729 4812 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:19 crc kubenswrapper[4812]: E0218 17:29:19.486711 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="extract-utilities" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.486728 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="extract-utilities" Feb 18 17:29:19 crc kubenswrapper[4812]: E0218 17:29:19.486767 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="extract-content" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.486775 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="extract-content" Feb 18 17:29:19 crc kubenswrapper[4812]: E0218 17:29:19.486788 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.486797 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.487014 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20a1bcc-d3a3-4f83-af51-d7da1f98e62d" containerName="registry-server" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.488536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.500000 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.647385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.647537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777s2\" (UniqueName: \"kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.647596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.749327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.749773 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.750178 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.750447 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777s2\" (UniqueName: \"kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.750454 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.776253 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777s2\" (UniqueName: \"kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2\") pod \"redhat-marketplace-d82rr\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:19 crc kubenswrapper[4812]: I0218 17:29:19.809357 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:20 crc kubenswrapper[4812]: I0218 17:29:20.244247 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:21 crc kubenswrapper[4812]: I0218 17:29:21.146431 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerID="9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60" exitCode=0 Feb 18 17:29:21 crc kubenswrapper[4812]: I0218 17:29:21.146504 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerDied","Data":"9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60"} Feb 18 17:29:21 crc kubenswrapper[4812]: I0218 17:29:21.146852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerStarted","Data":"8c665782453f1012558f161a4f9a27769ae7dbd7ab9643b89beab761fb2e36a4"} Feb 18 17:29:23 crc kubenswrapper[4812]: I0218 17:29:23.166862 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerStarted","Data":"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411"} Feb 18 17:29:25 crc kubenswrapper[4812]: I0218 17:29:25.188698 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerID="d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411" exitCode=0 Feb 18 17:29:25 crc kubenswrapper[4812]: I0218 17:29:25.188780 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerDied","Data":"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411"} Feb 18 17:29:28 crc kubenswrapper[4812]: I0218 17:29:28.223371 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerStarted","Data":"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057"} Feb 18 17:29:28 crc kubenswrapper[4812]: I0218 17:29:28.255075 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d82rr" podStartSLOduration=3.091611069 podStartE2EDuration="9.255045966s" podCreationTimestamp="2026-02-18 17:29:19 +0000 UTC" firstStartedPulling="2026-02-18 17:29:21.150017105 +0000 UTC m=+3581.415628014" lastFinishedPulling="2026-02-18 17:29:27.313451952 +0000 UTC m=+3587.579062911" observedRunningTime="2026-02-18 17:29:28.245204239 +0000 UTC m=+3588.510815158" watchObservedRunningTime="2026-02-18 17:29:28.255045966 +0000 UTC m=+3588.520656875" Feb 18 17:29:29 crc kubenswrapper[4812]: I0218 17:29:29.810547 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:29 crc kubenswrapper[4812]: I0218 17:29:29.810599 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:29 crc kubenswrapper[4812]: I0218 17:29:29.856340 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:33 crc kubenswrapper[4812]: I0218 17:29:33.413824 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:29:33 crc kubenswrapper[4812]: I0218 17:29:33.414203 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:29:39 crc kubenswrapper[4812]: I0218 17:29:39.871262 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:39 crc kubenswrapper[4812]: I0218 17:29:39.917242 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.325740 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d82rr" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="registry-server" containerID="cri-o://afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057" gracePeriod=2 Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.864574 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.941900 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content\") pod \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.942074 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities\") pod \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.942310 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777s2\" (UniqueName: \"kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2\") pod \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\" (UID: \"3e4214a9-74aa-4bc5-a0b6-b1220add579f\") " Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.943208 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities" (OuterVolumeSpecName: "utilities") pod "3e4214a9-74aa-4bc5-a0b6-b1220add579f" (UID: "3e4214a9-74aa-4bc5-a0b6-b1220add579f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.953270 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2" (OuterVolumeSpecName: "kube-api-access-777s2") pod "3e4214a9-74aa-4bc5-a0b6-b1220add579f" (UID: "3e4214a9-74aa-4bc5-a0b6-b1220add579f"). InnerVolumeSpecName "kube-api-access-777s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:29:40 crc kubenswrapper[4812]: I0218 17:29:40.986051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e4214a9-74aa-4bc5-a0b6-b1220add579f" (UID: "3e4214a9-74aa-4bc5-a0b6-b1220add579f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.044711 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-777s2\" (UniqueName: \"kubernetes.io/projected/3e4214a9-74aa-4bc5-a0b6-b1220add579f-kube-api-access-777s2\") on node \"crc\" DevicePath \"\"" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.044744 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.044752 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e4214a9-74aa-4bc5-a0b6-b1220add579f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.338689 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerID="afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057" exitCode=0 Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.338755 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d82rr" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.338779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerDied","Data":"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057"} Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.339348 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d82rr" event={"ID":"3e4214a9-74aa-4bc5-a0b6-b1220add579f","Type":"ContainerDied","Data":"8c665782453f1012558f161a4f9a27769ae7dbd7ab9643b89beab761fb2e36a4"} Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.339365 4812 scope.go:117] "RemoveContainer" containerID="afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.359976 4812 scope.go:117] "RemoveContainer" containerID="d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.387349 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.398826 4812 scope.go:117] "RemoveContainer" containerID="9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.409360 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d82rr"] Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.433247 4812 scope.go:117] "RemoveContainer" containerID="afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057" Feb 18 17:29:41 crc kubenswrapper[4812]: E0218 17:29:41.433631 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057\": container with ID starting with afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057 not found: ID does not exist" containerID="afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.433721 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057"} err="failed to get container status \"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057\": rpc error: code = NotFound desc = could not find container \"afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057\": container with ID starting with afa8a4e79587c41ca9c59056231e80a14589ab239d96d4dcbee9eb9f5dbf5057 not found: ID does not exist" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.433753 4812 scope.go:117] "RemoveContainer" containerID="d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411" Feb 18 17:29:41 crc kubenswrapper[4812]: E0218 17:29:41.434237 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411\": container with ID starting with d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411 not found: ID does not exist" containerID="d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.434292 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411"} err="failed to get container status \"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411\": rpc error: code = NotFound desc = could not find container \"d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411\": container with ID starting with d98c3558ed39f13c3cf4fadb89357075a7801ab104bd990b04f6a57b6f4ff411 not found: ID does not exist" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.434315 4812 scope.go:117] "RemoveContainer" containerID="9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60" Feb 18 17:29:41 crc kubenswrapper[4812]: E0218 17:29:41.434731 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60\": container with ID starting with 9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60 not found: ID does not exist" containerID="9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60" Feb 18 17:29:41 crc kubenswrapper[4812]: I0218 17:29:41.434754 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60"} err="failed to get container status \"9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60\": rpc error: code = NotFound desc = could not find container \"9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60\": container with ID starting with 9f376e79b0a4f6b3cdabcc86b77502b306a91e11b3ac292edacd1ac7e594ff60 not found: ID does not exist" Feb 18 17:29:42 crc kubenswrapper[4812]: I0218 17:29:42.518996 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" path="/var/lib/kubelet/pods/3e4214a9-74aa-4bc5-a0b6-b1220add579f/volumes" Feb 18 17:29:51 crc kubenswrapper[4812]: E0218 17:29:51.767080 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:54566->38.102.83.106:36505: write tcp 38.102.83.106:54566->38.102.83.106:36505: write: broken pipe Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.157595 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx"] Feb 18 17:30:00 crc kubenswrapper[4812]: E0218 17:30:00.159945 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="extract-content" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.160003 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="extract-content" Feb 18 17:30:00 crc kubenswrapper[4812]: E0218 17:30:00.160023 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="extract-utilities" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.160030 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="extract-utilities" Feb 18 17:30:00 crc kubenswrapper[4812]: E0218 17:30:00.160038 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="registry-server" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.160044 4812 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="registry-server" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.160342 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4214a9-74aa-4bc5-a0b6-b1220add579f" containerName="registry-server" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.161469 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.164256 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.164480 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.171453 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx"] Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.326557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.326622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmzl\" (UniqueName: \"kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.326737 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.429681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.430012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.430116 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mmzl\" (UniqueName: 
\"kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.431752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.441421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.449519 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mmzl\" (UniqueName: \"kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl\") pod \"collect-profiles-29523930-r8sqx\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.486242 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:00 crc kubenswrapper[4812]: I0218 17:30:00.939426 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx"] Feb 18 17:30:01 crc kubenswrapper[4812]: I0218 17:30:01.529909 4812 generic.go:334] "Generic (PLEG): container finished" podID="33276d0b-5395-4f39-bad7-59d433ef97e2" containerID="32c39120edc77fb0b207b6e80ab09b19626ed26382f57aee5d42a82b4205c402" exitCode=0 Feb 18 17:30:01 crc kubenswrapper[4812]: I0218 17:30:01.530012 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" event={"ID":"33276d0b-5395-4f39-bad7-59d433ef97e2","Type":"ContainerDied","Data":"32c39120edc77fb0b207b6e80ab09b19626ed26382f57aee5d42a82b4205c402"} Feb 18 17:30:01 crc kubenswrapper[4812]: I0218 17:30:01.530218 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" event={"ID":"33276d0b-5395-4f39-bad7-59d433ef97e2","Type":"ContainerStarted","Data":"55261ce8a96d146c9d3d97692cb86bba2f1347aede10b39633b70bfb2c4a3e04"} Feb 18 17:30:02 crc kubenswrapper[4812]: I0218 17:30:02.974618 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.093451 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume\") pod \"33276d0b-5395-4f39-bad7-59d433ef97e2\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.093942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mmzl\" (UniqueName: \"kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl\") pod \"33276d0b-5395-4f39-bad7-59d433ef97e2\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.094050 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume\") pod \"33276d0b-5395-4f39-bad7-59d433ef97e2\" (UID: \"33276d0b-5395-4f39-bad7-59d433ef97e2\") " Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.095017 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume" (OuterVolumeSpecName: "config-volume") pod "33276d0b-5395-4f39-bad7-59d433ef97e2" (UID: "33276d0b-5395-4f39-bad7-59d433ef97e2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.102427 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "33276d0b-5395-4f39-bad7-59d433ef97e2" (UID: "33276d0b-5395-4f39-bad7-59d433ef97e2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.103327 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl" (OuterVolumeSpecName: "kube-api-access-8mmzl") pod "33276d0b-5395-4f39-bad7-59d433ef97e2" (UID: "33276d0b-5395-4f39-bad7-59d433ef97e2"). InnerVolumeSpecName "kube-api-access-8mmzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.196688 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33276d0b-5395-4f39-bad7-59d433ef97e2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.196725 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mmzl\" (UniqueName: \"kubernetes.io/projected/33276d0b-5395-4f39-bad7-59d433ef97e2-kube-api-access-8mmzl\") on node \"crc\" DevicePath \"\"" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.196735 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33276d0b-5395-4f39-bad7-59d433ef97e2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.413496 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.413591 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.413667 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.415015 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.415162 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" gracePeriod=600 Feb 18 17:30:03 crc kubenswrapper[4812]: E0218 17:30:03.536050 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.551385 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" exitCode=0 Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.551453 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489"} Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.551875 4812 scope.go:117] "RemoveContainer" containerID="00a52704db1bdd8a5a3b7b54008b2b319dec1cc6c628bcc7e1d536759fede702" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.552555 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:30:03 crc kubenswrapper[4812]: E0218 17:30:03.552806 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.553369 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" event={"ID":"33276d0b-5395-4f39-bad7-59d433ef97e2","Type":"ContainerDied","Data":"55261ce8a96d146c9d3d97692cb86bba2f1347aede10b39633b70bfb2c4a3e04"} Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.553390 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55261ce8a96d146c9d3d97692cb86bba2f1347aede10b39633b70bfb2c4a3e04" Feb 18 17:30:03 crc kubenswrapper[4812]: I0218 17:30:03.553425 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523930-r8sqx" Feb 18 17:30:04 crc kubenswrapper[4812]: I0218 17:30:04.050839 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs"] Feb 18 17:30:04 crc kubenswrapper[4812]: I0218 17:30:04.059559 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523885-hzpcs"] Feb 18 17:30:04 crc kubenswrapper[4812]: I0218 17:30:04.523203 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04518de5-df9f-4e43-b939-89cbfc52a56a" path="/var/lib/kubelet/pods/04518de5-df9f-4e43-b939-89cbfc52a56a/volumes" Feb 18 17:30:16 crc kubenswrapper[4812]: I0218 17:30:16.277622 4812 scope.go:117] "RemoveContainer" containerID="f4b92cf5ee3c2f5c85c6713a1ba81afe2e1b4582dbc7554bf990d81173df003f" Feb 18 17:30:16 crc kubenswrapper[4812]: I0218 17:30:16.508780 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:30:16 crc kubenswrapper[4812]: E0218 17:30:16.509780 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:30:28 crc kubenswrapper[4812]: I0218 17:30:28.508818 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:30:28 crc 
kubenswrapper[4812]: E0218 17:30:28.509772 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:30:40 crc kubenswrapper[4812]: I0218 17:30:40.515870 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:30:40 crc kubenswrapper[4812]: E0218 17:30:40.516788 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:30:53 crc kubenswrapper[4812]: I0218 17:30:53.508754 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:30:53 crc kubenswrapper[4812]: E0218 17:30:53.509568 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:31:07 crc kubenswrapper[4812]: I0218 17:31:07.509148 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:31:07 crc kubenswrapper[4812]: E0218 17:31:07.509944 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:31:20 crc kubenswrapper[4812]: I0218 17:31:20.516757 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:31:20 crc kubenswrapper[4812]: E0218 17:31:20.517654 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:31:33 crc kubenswrapper[4812]: I0218 17:31:33.507737 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:31:33 crc kubenswrapper[4812]: E0218 17:31:33.508458 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:31:44 crc kubenswrapper[4812]: I0218 17:31:44.508590 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:31:44 crc kubenswrapper[4812]: E0218 17:31:44.509314 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:31:59 crc kubenswrapper[4812]: I0218 17:31:59.508314 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:31:59 crc kubenswrapper[4812]: E0218 17:31:59.509114 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:32:10 crc kubenswrapper[4812]: I0218 17:32:10.508879 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:32:10 crc kubenswrapper[4812]: E0218 17:32:10.510742 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:32:23 crc kubenswrapper[4812]: I0218 17:32:23.509338 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:32:23 crc kubenswrapper[4812]: E0218 17:32:23.512013 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:32:34 crc kubenswrapper[4812]: I0218 17:32:34.508957 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:32:34 crc kubenswrapper[4812]: E0218 17:32:34.509824 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:32:46 crc kubenswrapper[4812]: I0218 17:32:46.508809 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:32:46 crc kubenswrapper[4812]: E0218 17:32:46.510300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:32:58 crc kubenswrapper[4812]: I0218 17:32:58.508835 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:32:58 crc kubenswrapper[4812]: E0218 17:32:58.509587 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:33:12 crc kubenswrapper[4812]: I0218 17:33:12.507994 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:33:12 crc kubenswrapper[4812]: E0218 17:33:12.508855 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:33:25 crc kubenswrapper[4812]: I0218 17:33:25.508837 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:33:25 crc kubenswrapper[4812]: E0218 17:33:25.509560 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:33:36 crc kubenswrapper[4812]: I0218 17:33:36.508257 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:33:36 crc kubenswrapper[4812]: E0218 17:33:36.508879 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:33:50 crc kubenswrapper[4812]: I0218 17:33:50.513624 4812 
scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:33:50 crc kubenswrapper[4812]: E0218 17:33:50.514551 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:34:04 crc kubenswrapper[4812]: I0218 17:34:04.512520 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:34:04 crc kubenswrapper[4812]: E0218 17:34:04.513298 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:34:15 crc kubenswrapper[4812]: I0218 17:34:15.509017 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:34:15 crc kubenswrapper[4812]: E0218 17:34:15.509930 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:34:26 crc kubenswrapper[4812]: I0218 17:34:26.507826 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:34:26 crc kubenswrapper[4812]: E0218 17:34:26.508575 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:34:38 crc kubenswrapper[4812]: I0218 17:34:38.508052 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:34:38 crc kubenswrapper[4812]: E0218 17:34:38.508846 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:34:50 crc kubenswrapper[4812]: I0218 17:34:50.516947 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:34:50 crc kubenswrapper[4812]: E0218 17:34:50.518317 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:35:02 crc kubenswrapper[4812]: I0218 17:35:02.507705 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:35:02 crc kubenswrapper[4812]: E0218 17:35:02.508384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:35:13 crc kubenswrapper[4812]: I0218 17:35:13.507750 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:35:13 crc kubenswrapper[4812]: I0218 17:35:13.740674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407"} Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.027075 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:08 crc kubenswrapper[4812]: E0218 17:37:08.028231 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33276d0b-5395-4f39-bad7-59d433ef97e2" containerName="collect-profiles" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.028251 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="33276d0b-5395-4f39-bad7-59d433ef97e2" containerName="collect-profiles" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.028499 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="33276d0b-5395-4f39-bad7-59d433ef97e2" containerName="collect-profiles" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.030238 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.041352 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.100259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.100325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.100411 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vp7\" (UniqueName: \"kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.202162 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vp7\" (UniqueName: \"kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.202295 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.202364 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.203076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.203076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.225251 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j9vp7\" (UniqueName: \"kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7\") pod \"certified-operators-p95vr\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.352439 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:08 crc kubenswrapper[4812]: I0218 17:37:08.866957 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:09 crc kubenswrapper[4812]: I0218 17:37:09.813385 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerID="fc50ecf1a17dae1faebe863e3d4273d730fe8e96e85cf066b94d708a32a4035e" exitCode=0 Feb 18 17:37:09 crc kubenswrapper[4812]: I0218 17:37:09.813460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerDied","Data":"fc50ecf1a17dae1faebe863e3d4273d730fe8e96e85cf066b94d708a32a4035e"} Feb 18 17:37:09 crc kubenswrapper[4812]: I0218 17:37:09.813709 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerStarted","Data":"124205665d3a3bb237b19031110b0f8d2dad1b053f784fea7a450782c64dcdf4"} Feb 18 17:37:09 crc kubenswrapper[4812]: I0218 17:37:09.816077 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:37:10 crc kubenswrapper[4812]: I0218 17:37:10.825682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerStarted","Data":"3448b637399e5c25ab7706cdfe5ab8bb83548ece924599967364d0647b1bc27c"} Feb 18 17:37:12 crc kubenswrapper[4812]: I0218 17:37:12.850887 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerID="3448b637399e5c25ab7706cdfe5ab8bb83548ece924599967364d0647b1bc27c" exitCode=0 Feb 18 17:37:12 crc kubenswrapper[4812]: I0218 17:37:12.850971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerDied","Data":"3448b637399e5c25ab7706cdfe5ab8bb83548ece924599967364d0647b1bc27c"} Feb 18 17:37:13 crc kubenswrapper[4812]: I0218 17:37:13.861695 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerStarted","Data":"c8eab6e99536a91cb30d67e78410ca989bb5c76579c63c28eeff091f996d607b"} Feb 18 17:37:13 crc kubenswrapper[4812]: I0218 17:37:13.899056 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p95vr" podStartSLOduration=2.454526318 podStartE2EDuration="5.899031792s" podCreationTimestamp="2026-02-18 17:37:08 +0000 UTC" firstStartedPulling="2026-02-18 17:37:09.81584026 +0000 UTC m=+4050.081451169" lastFinishedPulling="2026-02-18 17:37:13.260345734 +0000 UTC m=+4053.525956643" observedRunningTime="2026-02-18 17:37:13.881047996 +0000 UTC m=+4054.146658915" watchObservedRunningTime="2026-02-18 
17:37:13.899031792 +0000 UTC m=+4054.164642701" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.212943 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.215962 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.240918 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.292614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hml7z\" (UniqueName: \"kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.292896 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.293165 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.395535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hml7z\" (UniqueName: \"kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.395674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.395786 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.396393 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.396485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.424441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hml7z\" (UniqueName: \"kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z\") pod \"community-operators-2lbrg\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:17 crc kubenswrapper[4812]: I0218 17:37:17.538670 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.114472 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:18 crc kubenswrapper[4812]: W0218 17:37:18.116155 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00535bc2_17c6_43eb_a11d_9a5ae0c2fee1.slice/crio-ce61d6dfa694dc109bfd7a5c5907f818ea2633d5165052f0fd3112ffd610fe4f WatchSource:0}: Error finding container ce61d6dfa694dc109bfd7a5c5907f818ea2633d5165052f0fd3112ffd610fe4f: Status 404 returned error can't find the container with id ce61d6dfa694dc109bfd7a5c5907f818ea2633d5165052f0fd3112ffd610fe4f Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.353295 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.353352 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.403179 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.914515 4812 generic.go:334] "Generic (PLEG): container finished" podID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerID="e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f" exitCode=0 Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.914635 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerDied","Data":"e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f"} Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.914694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerStarted","Data":"ce61d6dfa694dc109bfd7a5c5907f818ea2633d5165052f0fd3112ffd610fe4f"} Feb 18 17:37:18 crc kubenswrapper[4812]: I0218 17:37:18.962928 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:20 crc kubenswrapper[4812]: I0218 17:37:20.806864 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:20 crc kubenswrapper[4812]: I0218 17:37:20.931400 4812 generic.go:334] "Generic (PLEG): container finished" podID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" 
containerID="17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4" exitCode=0 Feb 18 17:37:20 crc kubenswrapper[4812]: I0218 17:37:20.931497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerDied","Data":"17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4"} Feb 18 17:37:20 crc kubenswrapper[4812]: I0218 17:37:20.931720 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p95vr" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="registry-server" containerID="cri-o://c8eab6e99536a91cb30d67e78410ca989bb5c76579c63c28eeff091f996d607b" gracePeriod=2 Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.943388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerStarted","Data":"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891"} Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.946048 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerID="c8eab6e99536a91cb30d67e78410ca989bb5c76579c63c28eeff091f996d607b" exitCode=0 Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.946090 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerDied","Data":"c8eab6e99536a91cb30d67e78410ca989bb5c76579c63c28eeff091f996d607b"} Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.946144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p95vr" event={"ID":"0e32b40c-ec6e-44fa-9821-984b284cc619","Type":"ContainerDied","Data":"124205665d3a3bb237b19031110b0f8d2dad1b053f784fea7a450782c64dcdf4"} Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.946160 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124205665d3a3bb237b19031110b0f8d2dad1b053f784fea7a450782c64dcdf4" Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.951940 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:21 crc kubenswrapper[4812]: I0218 17:37:21.965295 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2lbrg" podStartSLOduration=2.564881866 podStartE2EDuration="4.965279161s" podCreationTimestamp="2026-02-18 17:37:17 +0000 UTC" firstStartedPulling="2026-02-18 17:37:18.916024239 +0000 UTC m=+4059.181635148" lastFinishedPulling="2026-02-18 17:37:21.316421524 +0000 UTC m=+4061.582032443" observedRunningTime="2026-02-18 17:37:21.958380426 +0000 UTC m=+4062.223991335" watchObservedRunningTime="2026-02-18 17:37:21.965279161 +0000 UTC m=+4062.230890070" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.098451 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vp7\" (UniqueName: \"kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7\") pod \"0e32b40c-ec6e-44fa-9821-984b284cc619\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.098548 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content\") pod \"0e32b40c-ec6e-44fa-9821-984b284cc619\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.098606 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities\") pod \"0e32b40c-ec6e-44fa-9821-984b284cc619\" (UID: \"0e32b40c-ec6e-44fa-9821-984b284cc619\") " Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.099505 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities" (OuterVolumeSpecName: "utilities") pod "0e32b40c-ec6e-44fa-9821-984b284cc619" (UID: "0e32b40c-ec6e-44fa-9821-984b284cc619"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.105786 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7" (OuterVolumeSpecName: "kube-api-access-j9vp7") pod "0e32b40c-ec6e-44fa-9821-984b284cc619" (UID: "0e32b40c-ec6e-44fa-9821-984b284cc619"). InnerVolumeSpecName "kube-api-access-j9vp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.147075 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e32b40c-ec6e-44fa-9821-984b284cc619" (UID: "0e32b40c-ec6e-44fa-9821-984b284cc619"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.200688 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9vp7\" (UniqueName: \"kubernetes.io/projected/0e32b40c-ec6e-44fa-9821-984b284cc619-kube-api-access-j9vp7\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.200716 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.200727 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e32b40c-ec6e-44fa-9821-984b284cc619-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.954315 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p95vr" Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.975530 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:22 crc kubenswrapper[4812]: I0218 17:37:22.983632 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p95vr"] Feb 18 17:37:24 crc kubenswrapper[4812]: I0218 17:37:24.520595 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" path="/var/lib/kubelet/pods/0e32b40c-ec6e-44fa-9821-984b284cc619/volumes" Feb 18 17:37:27 crc kubenswrapper[4812]: I0218 17:37:27.539177 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:27 crc kubenswrapper[4812]: I0218 17:37:27.539751 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:27 crc kubenswrapper[4812]: I0218 17:37:27.584085 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:28 crc kubenswrapper[4812]: I0218 17:37:28.097690 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:28 crc kubenswrapper[4812]: I0218 17:37:28.227030 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.007014 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2lbrg" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="registry-server" containerID="cri-o://6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891" gracePeriod=2 Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.451383 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.547242 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities\") pod \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.547558 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content\") pod \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.547645 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hml7z\" (UniqueName: \"kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z\") pod \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\" (UID: \"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1\") " Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.548198 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities" (OuterVolumeSpecName: "utilities") pod "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" (UID: "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.548795 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.554419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z" (OuterVolumeSpecName: "kube-api-access-hml7z") pod "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" (UID: "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1"). InnerVolumeSpecName "kube-api-access-hml7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.605592 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" (UID: "00535bc2-17c6-43eb-a11d-9a5ae0c2fee1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.650363 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:30 crc kubenswrapper[4812]: I0218 17:37:30.650397 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hml7z\" (UniqueName: \"kubernetes.io/projected/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1-kube-api-access-hml7z\") on node \"crc\" DevicePath \"\"" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.017419 4812 generic.go:334] "Generic (PLEG): container finished" podID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerID="6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891" exitCode=0 Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.017457 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerDied","Data":"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891"} Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.017469 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lbrg" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.017492 4812 scope.go:117] "RemoveContainer" containerID="6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.017481 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lbrg" event={"ID":"00535bc2-17c6-43eb-a11d-9a5ae0c2fee1","Type":"ContainerDied","Data":"ce61d6dfa694dc109bfd7a5c5907f818ea2633d5165052f0fd3112ffd610fe4f"} Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.038658 4812 scope.go:117] "RemoveContainer" containerID="17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.049257 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.056964 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2lbrg"] Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.088301 4812 scope.go:117] "RemoveContainer" containerID="e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.116542 4812 scope.go:117] "RemoveContainer" containerID="6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891" Feb 18 17:37:31 crc kubenswrapper[4812]: E0218 17:37:31.117032 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891\": container with ID starting with 6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891 not found: ID does not exist" containerID="6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.117081 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891"} err="failed to get container status 
\"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891\": rpc error: code = NotFound desc = could not find container \"6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891\": container with ID starting with 6694a2c9efb09c3d5d1276954739062b86c5f6f407cc5ea6fc111ff3a111c891 not found: ID does not exist" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.117122 4812 scope.go:117] "RemoveContainer" containerID="17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4" Feb 18 17:37:31 crc kubenswrapper[4812]: E0218 17:37:31.117592 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4\": container with ID starting with 17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4 not found: ID does not exist" containerID="17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.117649 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4"} err="failed to get container status \"17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4\": rpc error: code = NotFound desc = could not find container \"17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4\": container with ID starting with 17ee2b64eafd2dfcce31a227b185da16b40c397f509291946f7db7e5f3e46da4 not found: ID does not exist" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.117684 4812 scope.go:117] "RemoveContainer" containerID="e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f" Feb 18 17:37:31 crc kubenswrapper[4812]: E0218 17:37:31.118125 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f\": container with ID starting with e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f not found: ID does not exist" containerID="e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f" Feb 18 17:37:31 crc kubenswrapper[4812]: I0218 17:37:31.118159 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f"} err="failed to get container status \"e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f\": rpc error: code = NotFound desc = could not find container \"e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f\": container with ID starting with e38fa3d83310e3bd728390762d6900994ccda26c6c7b19d45ac91e36e964ef8f not found: ID does not exist" Feb 18 17:37:32 crc kubenswrapper[4812]: I0218 17:37:32.519852 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" path="/var/lib/kubelet/pods/00535bc2-17c6-43eb-a11d-9a5ae0c2fee1/volumes" Feb 18 17:37:33 crc kubenswrapper[4812]: I0218 17:37:33.413937 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:37:33 crc kubenswrapper[4812]: I0218 17:37:33.414272 4812 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:38:03 crc kubenswrapper[4812]: I0218 17:38:03.414318 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:38:03 crc kubenswrapper[4812]: I0218 17:38:03.414935 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.170954 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.171923 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.171943 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.171964 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="extract-content" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.171974 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="extract-content" Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.171991 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172000 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.172013 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="extract-utilities" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172021 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="extract-utilities" Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.172041 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="extract-content" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172049 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="extract-content" Feb 18 17:38:06 crc kubenswrapper[4812]: E0218 17:38:06.172065 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="extract-utilities" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172073 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" 
containerName="extract-utilities" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172386 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e32b40c-ec6e-44fa-9821-984b284cc619" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.172442 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="00535bc2-17c6-43eb-a11d-9a5ae0c2fee1" containerName="registry-server" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.174288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.181285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.263423 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzrd\" (UniqueName: \"kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.263528 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.263550 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.365213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjzrd\" (UniqueName: \"kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.365293 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.365314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.365744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " 
pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.365875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.398452 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjzrd\" (UniqueName: \"kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd\") pod \"redhat-operators-zkgzt\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:06 crc kubenswrapper[4812]: I0218 17:38:06.502569 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:07 crc kubenswrapper[4812]: I0218 17:38:07.021471 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:08 crc kubenswrapper[4812]: I0218 17:38:08.379748 4812 generic.go:334] "Generic (PLEG): container finished" podID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerID="9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc" exitCode=0 Feb 18 17:38:08 crc kubenswrapper[4812]: I0218 17:38:08.379851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerDied","Data":"9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc"} Feb 18 17:38:08 crc kubenswrapper[4812]: I0218 17:38:08.380278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerStarted","Data":"f92ad7120b8ba97cc56567a141eac201355abca3a1441f258ae9d9ba59f56dfc"} Feb 18 17:38:09 crc kubenswrapper[4812]: I0218 17:38:09.393174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerStarted","Data":"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892"} Feb 18 17:38:15 crc kubenswrapper[4812]: I0218 17:38:15.450455 4812 generic.go:334] "Generic (PLEG): container finished" podID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerID="5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892" exitCode=0 Feb 18 17:38:15 crc kubenswrapper[4812]: I0218 17:38:15.450569 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerDied","Data":"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892"} Feb 18 17:38:16 crc kubenswrapper[4812]: I0218 17:38:16.461572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerStarted","Data":"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3"} Feb 18 17:38:16 crc kubenswrapper[4812]: I0218 17:38:16.485622 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkgzt" podStartSLOduration=3.052094085 podStartE2EDuration="10.485599297s" 
podCreationTimestamp="2026-02-18 17:38:06 +0000 UTC" firstStartedPulling="2026-02-18 17:38:08.381869227 +0000 UTC m=+4108.647480136" lastFinishedPulling="2026-02-18 17:38:15.815374419 +0000 UTC m=+4116.080985348" observedRunningTime="2026-02-18 17:38:16.480273032 +0000 UTC m=+4116.745883961" watchObservedRunningTime="2026-02-18 17:38:16.485599297 +0000 UTC m=+4116.751210206" Feb 18 17:38:16 crc kubenswrapper[4812]: I0218 17:38:16.503283 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:16 crc kubenswrapper[4812]: I0218 17:38:16.503445 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:17 crc kubenswrapper[4812]: I0218 17:38:17.567615 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkgzt" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="registry-server" probeResult="failure" output=< Feb 18 17:38:17 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:38:17 crc kubenswrapper[4812]: > Feb 18 17:38:26 crc kubenswrapper[4812]: I0218 17:38:26.551349 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:26 crc kubenswrapper[4812]: I0218 17:38:26.622842 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:27 crc kubenswrapper[4812]: I0218 17:38:27.673811 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:28 crc kubenswrapper[4812]: I0218 17:38:28.572868 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkgzt" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="registry-server" containerID="cri-o://2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3" gracePeriod=2 Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.160457 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.250847 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content\") pod \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.251112 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities\") pod \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.251140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjzrd\" (UniqueName: \"kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd\") pod \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\" (UID: \"d2fd3c50-7725-4430-a7de-ed303b8ebed6\") " Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.253062 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities" (OuterVolumeSpecName: "utilities") pod "d2fd3c50-7725-4430-a7de-ed303b8ebed6" (UID: "d2fd3c50-7725-4430-a7de-ed303b8ebed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.258487 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd" (OuterVolumeSpecName: "kube-api-access-xjzrd") pod "d2fd3c50-7725-4430-a7de-ed303b8ebed6" (UID: "d2fd3c50-7725-4430-a7de-ed303b8ebed6"). InnerVolumeSpecName "kube-api-access-xjzrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.356441 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.356484 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjzrd\" (UniqueName: \"kubernetes.io/projected/d2fd3c50-7725-4430-a7de-ed303b8ebed6-kube-api-access-xjzrd\") on node \"crc\" DevicePath \"\"" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.436428 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2fd3c50-7725-4430-a7de-ed303b8ebed6" (UID: "d2fd3c50-7725-4430-a7de-ed303b8ebed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.458496 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2fd3c50-7725-4430-a7de-ed303b8ebed6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.589740 4812 generic.go:334] "Generic (PLEG): container finished" podID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerID="2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3" exitCode=0 Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.589807 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerDied","Data":"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3"} Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.589857 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkgzt" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.590447 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkgzt" event={"ID":"d2fd3c50-7725-4430-a7de-ed303b8ebed6","Type":"ContainerDied","Data":"f92ad7120b8ba97cc56567a141eac201355abca3a1441f258ae9d9ba59f56dfc"} Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.590622 4812 scope.go:117] "RemoveContainer" containerID="2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.625562 4812 scope.go:117] "RemoveContainer" containerID="5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.630642 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.644074 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkgzt"] Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.648829 4812 scope.go:117] "RemoveContainer" containerID="9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.694443 4812 scope.go:117] "RemoveContainer" containerID="2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3" Feb 18 17:38:29 crc kubenswrapper[4812]: E0218 17:38:29.696469 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3\": container with ID starting with 2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3 not found: ID does not exist" containerID="2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.696527 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3"} err="failed to get container status \"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3\": rpc error: code = NotFound desc = could not find container \"2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3\": container with ID starting with 2937ba13b4823a74c78a5cb5873381419da199799dc0d1ef3b382858763fe9e3 not found: ID does not exist" Feb 18 17:38:29 crc 
kubenswrapper[4812]: I0218 17:38:29.696569 4812 scope.go:117] "RemoveContainer" containerID="5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892" Feb 18 17:38:29 crc kubenswrapper[4812]: E0218 17:38:29.696879 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892\": container with ID starting with 5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892 not found: ID does not exist" containerID="5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.696911 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892"} err="failed to get container status \"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892\": rpc error: code = NotFound desc = could not find container \"5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892\": container with ID starting with 5b0212b6b467026f60f06665f7d3cd5e15208618ad1c296e8681f7cb504ca892 not found: ID does not exist" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.696931 4812 scope.go:117] "RemoveContainer" containerID="9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc" Feb 18 17:38:29 crc kubenswrapper[4812]: E0218 17:38:29.697191 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc\": container with ID starting with 9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc not found: ID does not exist" containerID="9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc" Feb 18 17:38:29 crc kubenswrapper[4812]: I0218 17:38:29.697222 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc"} err="failed to get container status \"9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc\": rpc error: code = NotFound desc = could not find container \"9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc\": container with ID starting with 9a91016532363d01e06ce109434183c62a18d42c353b4807924ad61dea363dfc not found: ID does not exist" Feb 18 17:38:30 crc kubenswrapper[4812]: I0218 17:38:30.526066 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" path="/var/lib/kubelet/pods/d2fd3c50-7725-4430-a7de-ed303b8ebed6/volumes" Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.414346 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.414917 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.414973 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.415937 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.416022 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407" gracePeriod=600 Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.671640 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407" exitCode=0 Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.671733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407"} Feb 18 17:38:33 crc kubenswrapper[4812]: I0218 17:38:33.672305 4812 scope.go:117] "RemoveContainer" containerID="67e2830afe52f5044382d97ff7aa124642b62fcdaebbfc235b55d8fff4a80489" Feb 18 17:38:34 crc kubenswrapper[4812]: I0218 17:38:34.704810 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874"} Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.159085 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:33 crc kubenswrapper[4812]: E0218 17:39:33.160353 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="extract-utilities" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.160372 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="extract-utilities" Feb 18 17:39:33 crc kubenswrapper[4812]: E0218 17:39:33.160432 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="extract-content" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.160441 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="extract-content" Feb 18 17:39:33 crc kubenswrapper[4812]: E0218 17:39:33.160454 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="registry-server" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.160463 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="registry-server" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.160716 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d2fd3c50-7725-4430-a7de-ed303b8ebed6" containerName="registry-server" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.162786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.173844 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.269076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcjb\" (UniqueName: \"kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.269144 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.269537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.371229 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.371641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcjb\" (UniqueName: \"kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.371667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.371980 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.372124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content\") pod \"redhat-marketplace-k6j8x\" (UID: 
\"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.393684 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcjb\" (UniqueName: \"kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb\") pod \"redhat-marketplace-k6j8x\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.499873 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:33 crc kubenswrapper[4812]: I0218 17:39:33.986515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:34 crc kubenswrapper[4812]: I0218 17:39:34.231628 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerStarted","Data":"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a"} Feb 18 17:39:34 crc kubenswrapper[4812]: I0218 17:39:34.231675 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerStarted","Data":"c4f4eadbb1897899bbecb8baccb1e4a8a33cedc0865cf6836b355a8218a4ff12"} Feb 18 17:39:35 crc kubenswrapper[4812]: I0218 17:39:35.244352 4812 generic.go:334] "Generic (PLEG): container finished" podID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerID="0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a" exitCode=0 Feb 18 17:39:35 crc kubenswrapper[4812]: I0218 17:39:35.244431 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerDied","Data":"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a"} Feb 18 17:39:36 crc kubenswrapper[4812]: I0218 17:39:36.263406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerStarted","Data":"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01"} Feb 18 17:39:37 crc kubenswrapper[4812]: I0218 17:39:37.274173 4812 generic.go:334] "Generic (PLEG): container finished" podID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerID="6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01" exitCode=0 Feb 18 17:39:37 crc kubenswrapper[4812]: I0218 17:39:37.274387 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerDied","Data":"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01"} Feb 18 17:39:38 crc kubenswrapper[4812]: I0218 17:39:38.286911 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerStarted","Data":"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af"} Feb 18 17:39:38 crc kubenswrapper[4812]: I0218 17:39:38.310493 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6j8x" podStartSLOduration=2.880347535 
podStartE2EDuration="5.310474503s" podCreationTimestamp="2026-02-18 17:39:33 +0000 UTC" firstStartedPulling="2026-02-18 17:39:35.245693068 +0000 UTC m=+4195.511303977" lastFinishedPulling="2026-02-18 17:39:37.675820026 +0000 UTC m=+4197.941430945" observedRunningTime="2026-02-18 17:39:38.304611044 +0000 UTC m=+4198.570221963" watchObservedRunningTime="2026-02-18 17:39:38.310474503 +0000 UTC m=+4198.576085412" Feb 18 17:39:43 crc kubenswrapper[4812]: I0218 17:39:43.500716 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:43 crc kubenswrapper[4812]: I0218 17:39:43.501291 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:43 crc kubenswrapper[4812]: I0218 17:39:43.544904 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:44 crc kubenswrapper[4812]: I0218 17:39:44.391790 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:44 crc kubenswrapper[4812]: I0218 17:39:44.436836 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:46 crc kubenswrapper[4812]: I0218 17:39:46.368191 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6j8x" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="registry-server" containerID="cri-o://c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af" gracePeriod=2 Feb 18 17:39:46 crc kubenswrapper[4812]: I0218 17:39:46.895982 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.051595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities\") pod \"0b09c04a-3625-49a7-9dea-f76f584ed43b\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.051784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcjb\" (UniqueName: \"kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb\") pod \"0b09c04a-3625-49a7-9dea-f76f584ed43b\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.051821 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content\") pod \"0b09c04a-3625-49a7-9dea-f76f584ed43b\" (UID: \"0b09c04a-3625-49a7-9dea-f76f584ed43b\") " Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.052973 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities" (OuterVolumeSpecName: "utilities") pod "0b09c04a-3625-49a7-9dea-f76f584ed43b" (UID: "0b09c04a-3625-49a7-9dea-f76f584ed43b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.057228 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb" (OuterVolumeSpecName: "kube-api-access-9xcjb") pod "0b09c04a-3625-49a7-9dea-f76f584ed43b" (UID: "0b09c04a-3625-49a7-9dea-f76f584ed43b"). InnerVolumeSpecName "kube-api-access-9xcjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.074087 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b09c04a-3625-49a7-9dea-f76f584ed43b" (UID: "0b09c04a-3625-49a7-9dea-f76f584ed43b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.155197 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.155231 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xcjb\" (UniqueName: \"kubernetes.io/projected/0b09c04a-3625-49a7-9dea-f76f584ed43b-kube-api-access-9xcjb\") on node \"crc\" DevicePath \"\"" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.155242 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09c04a-3625-49a7-9dea-f76f584ed43b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.376476 4812 generic.go:334] "Generic (PLEG): container finished" podID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerID="c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af" exitCode=0 Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.376520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerDied","Data":"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af"} Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.376547 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6j8x" event={"ID":"0b09c04a-3625-49a7-9dea-f76f584ed43b","Type":"ContainerDied","Data":"c4f4eadbb1897899bbecb8baccb1e4a8a33cedc0865cf6836b355a8218a4ff12"} Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.376544 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6j8x" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.376562 4812 scope.go:117] "RemoveContainer" containerID="c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.406194 4812 scope.go:117] "RemoveContainer" containerID="6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.413906 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.427778 4812 scope.go:117] "RemoveContainer" containerID="0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.430129 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6j8x"] Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.477779 4812 scope.go:117] "RemoveContainer" containerID="c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af" Feb 18 17:39:47 crc kubenswrapper[4812]: E0218 17:39:47.478344 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af\": container with ID starting with c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af not found: ID does not exist" containerID="c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.478375 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af"} err="failed to get container status \"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af\": rpc error: code = NotFound desc = could not find container \"c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af\": container with ID starting with c3b9ebbfc14bf3a49967762e1135e3b2ec54e8b1991996801836676fed0115af not found: ID does not exist" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.478399 4812 scope.go:117] "RemoveContainer" containerID="6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01" Feb 18 17:39:47 crc kubenswrapper[4812]: E0218 17:39:47.478788 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01\": container with ID starting with 6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01 not found: ID does not exist" containerID="6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.478831 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01"} err="failed to get container status \"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01\": rpc error: code = NotFound desc = could not find container \"6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01\": container with ID starting with 6a31181a3ec9b0ff52f7a4e9c20a53d38ca6f45ed9aca4e58779315d74de5f01 not found: ID does not exist" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.478859 4812 scope.go:117] "RemoveContainer" 
containerID="0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a" Feb 18 17:39:47 crc kubenswrapper[4812]: E0218 17:39:47.479528 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a\": container with ID starting with 0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a not found: ID does not exist" containerID="0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a" Feb 18 17:39:47 crc kubenswrapper[4812]: I0218 17:39:47.479561 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a"} err="failed to get container status \"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a\": rpc error: code = NotFound desc = could not find container \"0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a\": container with ID starting with 0abd67b2a129524de2e516c3eb13d1a7305d2be871c53a0d5f3e81d00e84527a not found: ID does not exist" Feb 18 17:39:48 crc kubenswrapper[4812]: I0218 17:39:48.520662 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" path="/var/lib/kubelet/pods/0b09c04a-3625-49a7-9dea-f76f584ed43b/volumes" Feb 18 17:40:33 crc kubenswrapper[4812]: I0218 17:40:33.413359 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:40:33 crc kubenswrapper[4812]: I0218 17:40:33.415472 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:41:03 crc kubenswrapper[4812]: I0218 17:41:03.426670 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:41:03 crc kubenswrapper[4812]: I0218 17:41:03.427357 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.413666 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.414151 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.414193 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.415239 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.415284 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" gracePeriod=600 Feb 18 17:41:33 crc kubenswrapper[4812]: E0218 17:41:33.534003 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.739661 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" exitCode=0 Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.739709 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874"} Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.740078 4812 scope.go:117] "RemoveContainer" containerID="cb61521a1daf0ac9ab2762bf62e46eec5873ff4d2cec0182f5639d96ddce8407" Feb 18 17:41:33 crc kubenswrapper[4812]: I0218 17:41:33.741271 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:41:33 crc kubenswrapper[4812]: E0218 17:41:33.741807 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:41:48 crc kubenswrapper[4812]: I0218 17:41:48.508839 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:41:48 crc kubenswrapper[4812]: E0218 17:41:48.509604 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:42:02 crc kubenswrapper[4812]: I0218 17:42:02.508711 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:42:02 crc kubenswrapper[4812]: E0218 17:42:02.509695 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:42:15 crc kubenswrapper[4812]: I0218 17:42:15.508313 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:42:15 crc kubenswrapper[4812]: E0218 17:42:15.509058 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:42:28 crc kubenswrapper[4812]: I0218 17:42:28.508115 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:42:28 crc kubenswrapper[4812]: E0218 17:42:28.508851 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:42:43 crc kubenswrapper[4812]: I0218 17:42:43.507639 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:42:43 crc kubenswrapper[4812]: E0218 17:42:43.509760 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:42:54 crc kubenswrapper[4812]: I0218 17:42:54.509515 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:42:54 crc kubenswrapper[4812]: E0218 17:42:54.510375 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:43:08 crc kubenswrapper[4812]: I0218 17:43:08.507903 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:43:08 crc kubenswrapper[4812]: E0218 17:43:08.508894 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:43:16 crc kubenswrapper[4812]: I0218 17:43:16.618639 4812 scope.go:117] "RemoveContainer" containerID="fc50ecf1a17dae1faebe863e3d4273d730fe8e96e85cf066b94d708a32a4035e" Feb 18 17:43:16 crc kubenswrapper[4812]: I0218 17:43:16.648441 4812 scope.go:117] "RemoveContainer" containerID="c8eab6e99536a91cb30d67e78410ca989bb5c76579c63c28eeff091f996d607b" Feb 18 17:43:16 crc kubenswrapper[4812]: I0218 17:43:16.706245 4812 scope.go:117] "RemoveContainer" containerID="3448b637399e5c25ab7706cdfe5ab8bb83548ece924599967364d0647b1bc27c" Feb 18 17:43:19 crc kubenswrapper[4812]: I0218 17:43:19.507653 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:43:19 crc kubenswrapper[4812]: E0218 17:43:19.508303 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:43:32 crc kubenswrapper[4812]: I0218 17:43:32.509249 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:43:32 crc kubenswrapper[4812]: E0218 17:43:32.510209 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:43:47 crc kubenswrapper[4812]: I0218 17:43:47.508566 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:43:47 crc kubenswrapper[4812]: E0218 17:43:47.509460 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:43:59 crc kubenswrapper[4812]: I0218 17:43:59.510168 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:43:59 crc kubenswrapper[4812]: E0218 17:43:59.512068 4812 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:44:14 crc kubenswrapper[4812]: I0218 17:44:14.508803 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:44:14 crc kubenswrapper[4812]: E0218 17:44:14.509876 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:44:28 crc kubenswrapper[4812]: I0218 17:44:28.508736 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:44:28 crc kubenswrapper[4812]: E0218 17:44:28.509974 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:44:42 crc kubenswrapper[4812]: I0218 17:44:42.508519 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:44:42 crc kubenswrapper[4812]: E0218 17:44:42.509248 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:44:53 crc kubenswrapper[4812]: I0218 17:44:53.508187 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:44:53 crc kubenswrapper[4812]: E0218 17:44:53.509193 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.161662 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg"] Feb 18 17:45:00 crc kubenswrapper[4812]: E0218 17:45:00.162715 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="extract-utilities" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.162730 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="extract-utilities" Feb 18 17:45:00 crc kubenswrapper[4812]: E0218 17:45:00.162744 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="extract-content" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.162752 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="extract-content" Feb 18 17:45:00 crc kubenswrapper[4812]: E0218 17:45:00.162789 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="registry-server" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.162797 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="registry-server" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.163011 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b09c04a-3625-49a7-9dea-f76f584ed43b" containerName="registry-server" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.163721 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.167123 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.168198 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.176580 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg"] Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.303738 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.303787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwbt\" (UniqueName: \"kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.304071 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.406134 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwbt\" (UniqueName: \"kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt\") pod 
\"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.406246 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.406400 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.407295 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.415611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.427706 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwbt\" (UniqueName: \"kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt\") pod \"collect-profiles-29523945-5b7xg\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.488547 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.964017 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg"] Feb 18 17:45:00 crc kubenswrapper[4812]: I0218 17:45:00.973987 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" event={"ID":"19caee29-2b50-4ecf-896a-4ae18da50c2f","Type":"ContainerStarted","Data":"08b893e52b345ac81b597cb9881b4fe20e79aaa3d010bfe787c13eda87757f66"} Feb 18 17:45:01 crc kubenswrapper[4812]: I0218 17:45:01.985153 4812 generic.go:334] "Generic (PLEG): container finished" podID="19caee29-2b50-4ecf-896a-4ae18da50c2f" containerID="390584f25e892026c5c90227086bd66bc575f6b3a97c79cf636d98d38bc87746" exitCode=0 Feb 18 17:45:01 crc kubenswrapper[4812]: I0218 17:45:01.985229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" event={"ID":"19caee29-2b50-4ecf-896a-4ae18da50c2f","Type":"ContainerDied","Data":"390584f25e892026c5c90227086bd66bc575f6b3a97c79cf636d98d38bc87746"} Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.351125 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.465404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume\") pod \"19caee29-2b50-4ecf-896a-4ae18da50c2f\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.465994 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume\") pod \"19caee29-2b50-4ecf-896a-4ae18da50c2f\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.466022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfwbt\" (UniqueName: \"kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt\") pod \"19caee29-2b50-4ecf-896a-4ae18da50c2f\" (UID: \"19caee29-2b50-4ecf-896a-4ae18da50c2f\") " Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.466773 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume" (OuterVolumeSpecName: "config-volume") pod "19caee29-2b50-4ecf-896a-4ae18da50c2f" (UID: "19caee29-2b50-4ecf-896a-4ae18da50c2f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.472122 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt" (OuterVolumeSpecName: "kube-api-access-cfwbt") pod "19caee29-2b50-4ecf-896a-4ae18da50c2f" (UID: "19caee29-2b50-4ecf-896a-4ae18da50c2f"). InnerVolumeSpecName "kube-api-access-cfwbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.476031 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19caee29-2b50-4ecf-896a-4ae18da50c2f" (UID: "19caee29-2b50-4ecf-896a-4ae18da50c2f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.568010 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19caee29-2b50-4ecf-896a-4ae18da50c2f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.568037 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfwbt\" (UniqueName: \"kubernetes.io/projected/19caee29-2b50-4ecf-896a-4ae18da50c2f-kube-api-access-cfwbt\") on node \"crc\" DevicePath \"\"" Feb 18 17:45:03 crc kubenswrapper[4812]: I0218 17:45:03.568049 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19caee29-2b50-4ecf-896a-4ae18da50c2f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.004939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" event={"ID":"19caee29-2b50-4ecf-896a-4ae18da50c2f","Type":"ContainerDied","Data":"08b893e52b345ac81b597cb9881b4fe20e79aaa3d010bfe787c13eda87757f66"} Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.005021 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b893e52b345ac81b597cb9881b4fe20e79aaa3d010bfe787c13eda87757f66" Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.004977 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523945-5b7xg" Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.426516 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd"] Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.434388 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523900-sgdfd"] Feb 18 17:45:04 crc kubenswrapper[4812]: I0218 17:45:04.521295 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ae5c37-5264-4f63-94ef-49c90413afdd" path="/var/lib/kubelet/pods/d7ae5c37-5264-4f63-94ef-49c90413afdd/volumes" Feb 18 17:45:07 crc kubenswrapper[4812]: I0218 17:45:07.509213 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:45:07 crc kubenswrapper[4812]: E0218 17:45:07.510218 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:45:16 crc kubenswrapper[4812]: I0218 17:45:16.799767 4812 scope.go:117] "RemoveContainer" containerID="44fe34686de6fd59e43deb5a8cd5bca0eda6efffc7f4e5753a6bc704c5772ac9" Feb 18 17:45:21 crc kubenswrapper[4812]: I0218 17:45:21.508573 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:45:21 crc kubenswrapper[4812]: E0218 17:45:21.509385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:45:33 crc kubenswrapper[4812]: I0218 17:45:33.508421 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:45:33 crc kubenswrapper[4812]: E0218 17:45:33.509489 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:45:46 crc kubenswrapper[4812]: I0218 17:45:46.507787 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:45:46 crc kubenswrapper[4812]: E0218 17:45:46.509625 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:45:58 crc kubenswrapper[4812]: I0218 17:45:58.508792 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:45:58 crc kubenswrapper[4812]: E0218 17:45:58.509714 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:46:13 crc kubenswrapper[4812]: I0218 17:46:13.508258 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:46:13 crc kubenswrapper[4812]: E0218 17:46:13.509362 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:46:24 crc kubenswrapper[4812]: I0218 17:46:24.508955 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:46:24 crc kubenswrapper[4812]: E0218 17:46:24.510267 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:46:37 crc kubenswrapper[4812]: I0218 17:46:37.509249 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:46:37 crc kubenswrapper[4812]: I0218 17:46:37.905734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b"} Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.048910 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:01 crc kubenswrapper[4812]: E0218 17:48:01.050228 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19caee29-2b50-4ecf-896a-4ae18da50c2f" containerName="collect-profiles" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.050244 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19caee29-2b50-4ecf-896a-4ae18da50c2f" containerName="collect-profiles" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.050554 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19caee29-2b50-4ecf-896a-4ae18da50c2f" containerName="collect-profiles" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.052415 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.062958 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.174491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.174836 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zpjm\" (UniqueName: \"kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.174862 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.239959 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.242631 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.262316 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.276940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.277001 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpjm\" (UniqueName: \"kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.277035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.277719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.277963 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.303133 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpjm\" (UniqueName: \"kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm\") pod \"certified-operators-g246s\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.372468 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.379372 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.379440 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzlh\" (UniqueName: \"kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.379573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.481465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.481529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlzlh\" (UniqueName: \"kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.481622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.482428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.482891 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities\") pod \"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.502050 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlzlh\" (UniqueName: \"kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh\") pod 
\"community-operators-fwzt4\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:01 crc kubenswrapper[4812]: I0218 17:48:01.566717 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:02 crc kubenswrapper[4812]: I0218 17:48:02.009771 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:02 crc kubenswrapper[4812]: I0218 17:48:02.173680 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:02 crc kubenswrapper[4812]: W0218 17:48:02.647507 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608b19cd_4907_4860_93ab_6b086ae6928f.slice/crio-40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3 WatchSource:0}: Error finding container 40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3: Status 404 returned error can't find the container with id 40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3 Feb 18 17:48:02 crc kubenswrapper[4812]: I0218 17:48:02.719599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerStarted","Data":"3a60c31ccbfb791d27fb167b4a144b82cbcbca6a7d123d14f5610359e2320d07"} Feb 18 17:48:02 crc kubenswrapper[4812]: I0218 17:48:02.721272 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerStarted","Data":"40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3"} Feb 18 17:48:03 crc kubenswrapper[4812]: I0218 17:48:03.733690 4812 generic.go:334] "Generic (PLEG): container finished" podID="852845dd-0218-4605-a0f9-d822e78a391e" containerID="0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8" exitCode=0 Feb 18 17:48:03 crc kubenswrapper[4812]: I0218 17:48:03.733902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerDied","Data":"0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8"} Feb 18 17:48:03 crc kubenswrapper[4812]: I0218 17:48:03.736634 4812 generic.go:334] "Generic (PLEG): container finished" podID="608b19cd-4907-4860-93ab-6b086ae6928f" containerID="2c56548a4b9cd147f888ea00942336a0ef1e5b3cf5b92169572b169c6ba40cde" exitCode=0 Feb 18 17:48:03 crc kubenswrapper[4812]: I0218 17:48:03.736674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerDied","Data":"2c56548a4b9cd147f888ea00942336a0ef1e5b3cf5b92169572b169c6ba40cde"} Feb 18 17:48:03 crc kubenswrapper[4812]: I0218 17:48:03.739445 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:48:05 crc kubenswrapper[4812]: I0218 17:48:05.756298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerStarted","Data":"503a94b4c825090a8c0e2954158f053c026e1337a6bef38bf0a8dbb24decbd7a"} Feb 18 17:48:05 crc kubenswrapper[4812]: I0218 
17:48:05.759902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerStarted","Data":"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f"} Feb 18 17:48:06 crc kubenswrapper[4812]: I0218 17:48:06.772018 4812 generic.go:334] "Generic (PLEG): container finished" podID="852845dd-0218-4605-a0f9-d822e78a391e" containerID="7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f" exitCode=0 Feb 18 17:48:06 crc kubenswrapper[4812]: I0218 17:48:06.773019 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerDied","Data":"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f"} Feb 18 17:48:06 crc kubenswrapper[4812]: I0218 17:48:06.776474 4812 generic.go:334] "Generic (PLEG): container finished" podID="608b19cd-4907-4860-93ab-6b086ae6928f" containerID="503a94b4c825090a8c0e2954158f053c026e1337a6bef38bf0a8dbb24decbd7a" exitCode=0 Feb 18 17:48:06 crc kubenswrapper[4812]: I0218 17:48:06.776517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerDied","Data":"503a94b4c825090a8c0e2954158f053c026e1337a6bef38bf0a8dbb24decbd7a"} Feb 18 17:48:07 crc kubenswrapper[4812]: I0218 17:48:07.787223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerStarted","Data":"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b"} Feb 18 17:48:07 crc kubenswrapper[4812]: I0218 17:48:07.791819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerStarted","Data":"3db547ff6f8de99a4a639364d774e712109d28127678a37d1f4a779e90a97c75"} Feb 18 17:48:07 crc kubenswrapper[4812]: I0218 17:48:07.807052 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g246s" podStartSLOduration=3.363791061 podStartE2EDuration="6.807029553s" podCreationTimestamp="2026-02-18 17:48:01 +0000 UTC" firstStartedPulling="2026-02-18 17:48:03.739161471 +0000 UTC m=+4704.004772380" lastFinishedPulling="2026-02-18 17:48:07.182399953 +0000 UTC m=+4707.448010872" observedRunningTime="2026-02-18 17:48:07.80452407 +0000 UTC m=+4708.070134989" watchObservedRunningTime="2026-02-18 17:48:07.807029553 +0000 UTC m=+4708.072640482" Feb 18 17:48:07 crc kubenswrapper[4812]: I0218 17:48:07.833847 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwzt4" podStartSLOduration=3.347959776 podStartE2EDuration="6.833826973s" podCreationTimestamp="2026-02-18 17:48:01 +0000 UTC" firstStartedPulling="2026-02-18 17:48:03.740930246 +0000 UTC m=+4704.006541165" lastFinishedPulling="2026-02-18 17:48:07.226797423 +0000 UTC m=+4707.492408362" observedRunningTime="2026-02-18 17:48:07.820819558 +0000 UTC m=+4708.086430477" watchObservedRunningTime="2026-02-18 17:48:07.833826973 +0000 UTC m=+4708.099437882" Feb 18 17:48:11 crc kubenswrapper[4812]: I0218 17:48:11.372658 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:11 crc 
kubenswrapper[4812]: I0218 17:48:11.373685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:11 crc kubenswrapper[4812]: I0218 17:48:11.421545 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:11 crc kubenswrapper[4812]: I0218 17:48:11.567878 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:11 crc kubenswrapper[4812]: I0218 17:48:11.567939 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:11 crc kubenswrapper[4812]: I0218 17:48:11.627951 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:21 crc kubenswrapper[4812]: I0218 17:48:21.421330 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:21 crc kubenswrapper[4812]: I0218 17:48:21.481878 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:21 crc kubenswrapper[4812]: I0218 17:48:21.615490 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:21 crc kubenswrapper[4812]: I0218 17:48:21.915660 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g246s" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="registry-server" containerID="cri-o://e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b" gracePeriod=2 Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.466257 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.518215 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities\") pod \"852845dd-0218-4605-a0f9-d822e78a391e\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.518323 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content\") pod \"852845dd-0218-4605-a0f9-d822e78a391e\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.518389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zpjm\" (UniqueName: \"kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm\") pod \"852845dd-0218-4605-a0f9-d822e78a391e\" (UID: \"852845dd-0218-4605-a0f9-d822e78a391e\") " Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.519200 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities" (OuterVolumeSpecName: "utilities") pod "852845dd-0218-4605-a0f9-d822e78a391e" (UID: "852845dd-0218-4605-a0f9-d822e78a391e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.525370 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm" (OuterVolumeSpecName: "kube-api-access-2zpjm") pod "852845dd-0218-4605-a0f9-d822e78a391e" (UID: "852845dd-0218-4605-a0f9-d822e78a391e"). InnerVolumeSpecName "kube-api-access-2zpjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.568242 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "852845dd-0218-4605-a0f9-d822e78a391e" (UID: "852845dd-0218-4605-a0f9-d822e78a391e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.620591 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.620624 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852845dd-0218-4605-a0f9-d822e78a391e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.620637 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zpjm\" (UniqueName: \"kubernetes.io/projected/852845dd-0218-4605-a0f9-d822e78a391e-kube-api-access-2zpjm\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.929303 4812 generic.go:334] "Generic (PLEG): container finished" podID="852845dd-0218-4605-a0f9-d822e78a391e" containerID="e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b" exitCode=0 Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.929355 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerDied","Data":"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b"} Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.929429 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g246s" event={"ID":"852845dd-0218-4605-a0f9-d822e78a391e","Type":"ContainerDied","Data":"3a60c31ccbfb791d27fb167b4a144b82cbcbca6a7d123d14f5610359e2320d07"} Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.929460 4812 scope.go:117] "RemoveContainer" containerID="e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.929970 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g246s" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.955976 4812 scope.go:117] "RemoveContainer" containerID="7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f" Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.973400 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.981562 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g246s"] Feb 18 17:48:22 crc kubenswrapper[4812]: I0218 17:48:22.992253 4812 scope.go:117] "RemoveContainer" containerID="0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.024968 4812 scope.go:117] "RemoveContainer" containerID="e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b" Feb 18 17:48:23 crc kubenswrapper[4812]: E0218 17:48:23.025449 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b\": container with ID starting with e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b not found: ID does not exist" containerID="e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.025487 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b"} err="failed to get container status \"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b\": rpc error: code = NotFound desc = could not find container \"e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b\": container with ID starting with e4331380cebca275c88052f79cf706415f843effed6c908d0137538053fadd3b not found: ID does not exist" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.025520 4812 scope.go:117] "RemoveContainer" containerID="7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f" Feb 18 17:48:23 crc kubenswrapper[4812]: E0218 17:48:23.026040 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f\": container with ID starting with 7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f not found: ID does not exist" containerID="7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.026076 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f"} err="failed to get container status \"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f\": rpc error: code = NotFound desc = could not find container \"7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f\": container with ID starting with 7f6ca9c6b07ac94ea7ba2004252850952589e7cfa68f8295db71a05fa4cbae6f not found: ID does not exist" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.026119 4812 scope.go:117] "RemoveContainer" containerID="0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8" Feb 18 17:48:23 crc kubenswrapper[4812]: E0218 17:48:23.026754 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8\": container with ID starting with 0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8 not found: ID does not exist" containerID="0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.026829 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8"} err="failed to get container status \"0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8\": rpc error: code = NotFound desc = could not find container \"0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8\": container with ID starting with 0f94169c372c2be7eb166931ca64848fc79543aba2db35541dd5c6404975c7a8 not found: ID does not exist" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.471889 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.472463 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwzt4" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="registry-server" containerID="cri-o://3db547ff6f8de99a4a639364d774e712109d28127678a37d1f4a779e90a97c75" gracePeriod=2 Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.940314 4812 generic.go:334] "Generic (PLEG): container finished" podID="608b19cd-4907-4860-93ab-6b086ae6928f" containerID="3db547ff6f8de99a4a639364d774e712109d28127678a37d1f4a779e90a97c75" exitCode=0 Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.940378 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerDied","Data":"3db547ff6f8de99a4a639364d774e712109d28127678a37d1f4a779e90a97c75"} Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.940445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwzt4" event={"ID":"608b19cd-4907-4860-93ab-6b086ae6928f","Type":"ContainerDied","Data":"40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3"} Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.940460 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40675a8876db81f9dd22cd242d0d35383c735ac0e326c4de335d1bdb0f9f2af3" Feb 18 17:48:23 crc kubenswrapper[4812]: I0218 17:48:23.962522 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.049400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities\") pod \"608b19cd-4907-4860-93ab-6b086ae6928f\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.049506 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content\") pod \"608b19cd-4907-4860-93ab-6b086ae6928f\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.049660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlzlh\" (UniqueName: \"kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh\") pod \"608b19cd-4907-4860-93ab-6b086ae6928f\" (UID: \"608b19cd-4907-4860-93ab-6b086ae6928f\") " Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.050808 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities" (OuterVolumeSpecName: "utilities") pod "608b19cd-4907-4860-93ab-6b086ae6928f" (UID: "608b19cd-4907-4860-93ab-6b086ae6928f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.059136 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh" (OuterVolumeSpecName: "kube-api-access-tlzlh") pod "608b19cd-4907-4860-93ab-6b086ae6928f" (UID: "608b19cd-4907-4860-93ab-6b086ae6928f"). InnerVolumeSpecName "kube-api-access-tlzlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.092650 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "608b19cd-4907-4860-93ab-6b086ae6928f" (UID: "608b19cd-4907-4860-93ab-6b086ae6928f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.152320 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlzlh\" (UniqueName: \"kubernetes.io/projected/608b19cd-4907-4860-93ab-6b086ae6928f-kube-api-access-tlzlh\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.152680 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.152692 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608b19cd-4907-4860-93ab-6b086ae6928f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.522008 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852845dd-0218-4605-a0f9-d822e78a391e" path="/var/lib/kubelet/pods/852845dd-0218-4605-a0f9-d822e78a391e/volumes" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.950170 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwzt4" Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.986136 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:24 crc kubenswrapper[4812]: I0218 17:48:24.996928 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwzt4"] Feb 18 17:48:26 crc kubenswrapper[4812]: I0218 17:48:26.529160 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" path="/var/lib/kubelet/pods/608b19cd-4907-4860-93ab-6b086ae6928f/volumes" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.630133 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631495 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="extract-content" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631514 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="extract-content" Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631538 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631546 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631560 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="extract-utilities" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631569 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="extract-utilities" Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631583 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="extract-content" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631590 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="extract-content" Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631625 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631632 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: E0218 17:48:54.631645 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="extract-utilities" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631653 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="extract-utilities" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631924 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b19cd-4907-4860-93ab-6b086ae6928f" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.631956 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="852845dd-0218-4605-a0f9-d822e78a391e" containerName="registry-server" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.633949 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.653872 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.701843 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfd4p\" (UniqueName: \"kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.701906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.701987 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.802823 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfd4p\" (UniqueName: \"kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.802873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.802926 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.803585 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.803685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.824944 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfd4p\" (UniqueName: \"kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p\") pod \"redhat-operators-cm2b9\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:54 crc kubenswrapper[4812]: I0218 17:48:54.960057 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:48:55 crc kubenswrapper[4812]: I0218 17:48:55.418515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:48:56 crc kubenswrapper[4812]: I0218 17:48:56.276706 4812 generic.go:334] "Generic (PLEG): container finished" podID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerID="c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028" exitCode=0 Feb 18 17:48:56 crc kubenswrapper[4812]: I0218 17:48:56.276766 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerDied","Data":"c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028"} Feb 18 17:48:56 crc kubenswrapper[4812]: I0218 17:48:56.276995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerStarted","Data":"21a75cf43cff5364225df0ddec7910ba15f162f581a822655a04031938612167"} Feb 18 17:48:58 crc kubenswrapper[4812]: I0218 17:48:58.297054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerStarted","Data":"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245"} Feb 18 17:49:01 crc kubenswrapper[4812]: I0218 17:49:01.332615 4812 generic.go:334] "Generic (PLEG): container finished" podID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerID="1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245" exitCode=0 Feb 18 17:49:01 crc kubenswrapper[4812]: I0218 17:49:01.332703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerDied","Data":"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245"} Feb 18 17:49:02 crc kubenswrapper[4812]: I0218 17:49:02.349025 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerStarted","Data":"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1"} Feb 18 17:49:03 crc kubenswrapper[4812]: I0218 17:49:03.414322 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:49:03 crc kubenswrapper[4812]: I0218 17:49:03.414377 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:49:04 crc kubenswrapper[4812]: I0218 17:49:04.960578 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:04 crc kubenswrapper[4812]: I0218 17:49:04.960920 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:06 crc kubenswrapper[4812]: I0218 17:49:06.018243 4812 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cm2b9" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="registry-server" probeResult="failure" output=< Feb 18 17:49:06 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:49:06 crc kubenswrapper[4812]: > Feb 18 17:49:15 crc kubenswrapper[4812]: I0218 17:49:15.035155 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:15 crc kubenswrapper[4812]: I0218 17:49:15.061686 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cm2b9" podStartSLOduration=15.389218379 podStartE2EDuration="21.061666445s" podCreationTimestamp="2026-02-18 17:48:54 +0000 UTC" firstStartedPulling="2026-02-18 17:48:56.281393174 +0000 UTC m=+4756.547004103" lastFinishedPulling="2026-02-18 17:49:01.95384126 +0000 UTC m=+4762.219452169" observedRunningTime="2026-02-18 17:49:02.380792226 +0000 UTC m=+4762.646403145" watchObservedRunningTime="2026-02-18 17:49:15.061666445 +0000 UTC m=+4775.327277364" Feb 18 17:49:15 crc kubenswrapper[4812]: I0218 17:49:15.105023 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:15 crc kubenswrapper[4812]: I0218 17:49:15.273277 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:49:16 crc kubenswrapper[4812]: I0218 17:49:16.523917 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cm2b9" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="registry-server" containerID="cri-o://ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1" gracePeriod=2 Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.164301 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.255718 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities\") pod \"e767d640-5c17-4b27-95e1-ba8731b469b7\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.255783 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content\") pod \"e767d640-5c17-4b27-95e1-ba8731b469b7\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.255838 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfd4p\" (UniqueName: \"kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p\") pod \"e767d640-5c17-4b27-95e1-ba8731b469b7\" (UID: \"e767d640-5c17-4b27-95e1-ba8731b469b7\") " Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.257379 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities" (OuterVolumeSpecName: "utilities") pod "e767d640-5c17-4b27-95e1-ba8731b469b7" (UID: "e767d640-5c17-4b27-95e1-ba8731b469b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.261830 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p" (OuterVolumeSpecName: "kube-api-access-qfd4p") pod "e767d640-5c17-4b27-95e1-ba8731b469b7" (UID: "e767d640-5c17-4b27-95e1-ba8731b469b7"). InnerVolumeSpecName "kube-api-access-qfd4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.358441 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.358473 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfd4p\" (UniqueName: \"kubernetes.io/projected/e767d640-5c17-4b27-95e1-ba8731b469b7-kube-api-access-qfd4p\") on node \"crc\" DevicePath \"\"" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.378358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e767d640-5c17-4b27-95e1-ba8731b469b7" (UID: "e767d640-5c17-4b27-95e1-ba8731b469b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.460078 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767d640-5c17-4b27-95e1-ba8731b469b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.538209 4812 generic.go:334] "Generic (PLEG): container finished" podID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerID="ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1" exitCode=0 Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.538257 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerDied","Data":"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1"} Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.538288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm2b9" event={"ID":"e767d640-5c17-4b27-95e1-ba8731b469b7","Type":"ContainerDied","Data":"21a75cf43cff5364225df0ddec7910ba15f162f581a822655a04031938612167"} Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.538295 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm2b9" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.538310 4812 scope.go:117] "RemoveContainer" containerID="ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.559219 4812 scope.go:117] "RemoveContainer" containerID="1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.577825 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.594310 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cm2b9"] Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.596788 4812 scope.go:117] "RemoveContainer" containerID="c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.633676 4812 scope.go:117] "RemoveContainer" containerID="ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1" Feb 18 17:49:17 crc kubenswrapper[4812]: E0218 17:49:17.634050 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1\": container with ID starting with ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1 not found: ID does not exist" containerID="ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.634225 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1"} err="failed to get container status \"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1\": rpc error: code = NotFound desc = could not find container \"ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1\": container with ID starting with ce97601e55015bf3f67e24588476b905d16fec613560eec5912faa9b7fafd7a1 not found: ID does not exist" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.634323 4812 scope.go:117] "RemoveContainer" containerID="1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245" Feb 18 17:49:17 crc kubenswrapper[4812]: E0218 17:49:17.634775 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245\": container with ID starting with 1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245 not found: ID does not exist" containerID="1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.634800 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245"} err="failed to get container status \"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245\": rpc error: code = NotFound desc = could not find container \"1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245\": container with ID starting with 1319ebdd81d2f8c5562b7ebe2ce008d9a0d105438d6d33bc0dee60a35b56c245 not found: ID does not exist" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.634815 4812 scope.go:117] "RemoveContainer" 
containerID="c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028" Feb 18 17:49:17 crc kubenswrapper[4812]: E0218 17:49:17.635059 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028\": container with ID starting with c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028 not found: ID does not exist" containerID="c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028" Feb 18 17:49:17 crc kubenswrapper[4812]: I0218 17:49:17.635114 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028"} err="failed to get container status \"c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028\": rpc error: code = NotFound desc = could not find container \"c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028\": container with ID starting with c3af01a7ae30d238dcdf9e7dbda6a143f3de415a057d6ec46d8fc7bcd87ab028 not found: ID does not exist" Feb 18 17:49:18 crc kubenswrapper[4812]: I0218 17:49:18.522962 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" path="/var/lib/kubelet/pods/e767d640-5c17-4b27-95e1-ba8731b469b7/volumes" Feb 18 17:49:33 crc kubenswrapper[4812]: I0218 17:49:33.414702 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:49:33 crc kubenswrapper[4812]: I0218 17:49:33.415321 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:50:03 crc kubenswrapper[4812]: I0218 17:50:03.414029 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:50:03 crc kubenswrapper[4812]: I0218 17:50:03.414954 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:50:03 crc kubenswrapper[4812]: I0218 17:50:03.415154 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:50:03 crc kubenswrapper[4812]: I0218 17:50:03.416224 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:50:03 crc 
kubenswrapper[4812]: I0218 17:50:03.416360 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b" gracePeriod=600 Feb 18 17:50:04 crc kubenswrapper[4812]: I0218 17:50:04.020819 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b" exitCode=0 Feb 18 17:50:04 crc kubenswrapper[4812]: I0218 17:50:04.020923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b"} Feb 18 17:50:04 crc kubenswrapper[4812]: I0218 17:50:04.021129 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177"} Feb 18 17:50:04 crc kubenswrapper[4812]: I0218 17:50:04.021157 4812 scope.go:117] "RemoveContainer" containerID="7ecc85aa03614add5935b005a16e7a4a6b0b49ef8b86f86f5e00345638e25874" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.963720 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:17 crc kubenswrapper[4812]: E0218 17:50:17.964606 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="extract-utilities" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.964620 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="extract-utilities" Feb 18 17:50:17 crc kubenswrapper[4812]: E0218 17:50:17.964638 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="registry-server" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.964645 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="registry-server" Feb 18 17:50:17 crc kubenswrapper[4812]: E0218 17:50:17.964667 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="extract-content" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.964674 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="extract-content" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.964834 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e767d640-5c17-4b27-95e1-ba8731b469b7" containerName="registry-server" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.966263 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:17 crc kubenswrapper[4812]: I0218 17:50:17.990234 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.019123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.019435 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.019627 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlklf\" (UniqueName: \"kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.121991 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.122655 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.122124 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlklf\" (UniqueName: \"kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.122939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.123290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.150335 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nlklf\" (UniqueName: \"kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf\") pod \"redhat-marketplace-mhsdm\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.301470 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:18 crc kubenswrapper[4812]: I0218 17:50:18.784791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:19 crc kubenswrapper[4812]: I0218 17:50:19.156856 4812 generic.go:334] "Generic (PLEG): container finished" podID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerID="1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e" exitCode=0 Feb 18 17:50:19 crc kubenswrapper[4812]: I0218 17:50:19.156897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerDied","Data":"1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e"} Feb 18 17:50:19 crc kubenswrapper[4812]: I0218 17:50:19.156922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerStarted","Data":"00099d7b3a88e75c4dab4744a3960214ca9bbc055bf3f04e353f061b15d96f92"} Feb 18 17:50:20 crc kubenswrapper[4812]: I0218 17:50:20.167921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerStarted","Data":"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed"} Feb 18 17:50:21 crc kubenswrapper[4812]: I0218 17:50:21.180847 4812 generic.go:334] "Generic (PLEG): container finished" podID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerID="7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed" exitCode=0 Feb 18 17:50:21 crc kubenswrapper[4812]: I0218 17:50:21.180899 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerDied","Data":"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed"} Feb 18 17:50:22 crc kubenswrapper[4812]: I0218 17:50:22.192168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerStarted","Data":"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9"} Feb 18 17:50:22 crc kubenswrapper[4812]: I0218 17:50:22.210687 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mhsdm" podStartSLOduration=2.749853292 podStartE2EDuration="5.210671487s" podCreationTimestamp="2026-02-18 17:50:17 +0000 UTC" firstStartedPulling="2026-02-18 17:50:19.159829488 +0000 UTC m=+4839.425440397" lastFinishedPulling="2026-02-18 17:50:21.620647683 +0000 UTC m=+4841.886258592" observedRunningTime="2026-02-18 17:50:22.207939999 +0000 UTC m=+4842.473550928" watchObservedRunningTime="2026-02-18 17:50:22.210671487 +0000 UTC m=+4842.476282396" Feb 18 17:50:28 crc kubenswrapper[4812]: I0218 17:50:28.301554 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:28 crc kubenswrapper[4812]: I0218 17:50:28.302773 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:28 crc kubenswrapper[4812]: I0218 17:50:28.343343 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:29 crc kubenswrapper[4812]: I0218 17:50:29.316581 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:29 crc kubenswrapper[4812]: I0218 17:50:29.373365 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.270020 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mhsdm" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="registry-server" containerID="cri-o://b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9" gracePeriod=2 Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.750810 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.891081 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlklf\" (UniqueName: \"kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf\") pod \"042a5279-ebe3-4e72-8254-c8850a9947a2\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.891758 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content\") pod \"042a5279-ebe3-4e72-8254-c8850a9947a2\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.891839 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities\") pod \"042a5279-ebe3-4e72-8254-c8850a9947a2\" (UID: \"042a5279-ebe3-4e72-8254-c8850a9947a2\") " Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.892888 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities" (OuterVolumeSpecName: "utilities") pod "042a5279-ebe3-4e72-8254-c8850a9947a2" (UID: "042a5279-ebe3-4e72-8254-c8850a9947a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.898786 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf" (OuterVolumeSpecName: "kube-api-access-nlklf") pod "042a5279-ebe3-4e72-8254-c8850a9947a2" (UID: "042a5279-ebe3-4e72-8254-c8850a9947a2"). InnerVolumeSpecName "kube-api-access-nlklf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.923582 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "042a5279-ebe3-4e72-8254-c8850a9947a2" (UID: "042a5279-ebe3-4e72-8254-c8850a9947a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.993937 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.993973 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/042a5279-ebe3-4e72-8254-c8850a9947a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:50:31 crc kubenswrapper[4812]: I0218 17:50:31.993985 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlklf\" (UniqueName: \"kubernetes.io/projected/042a5279-ebe3-4e72-8254-c8850a9947a2-kube-api-access-nlklf\") on node \"crc\" DevicePath \"\"" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.280076 4812 generic.go:334] "Generic (PLEG): container finished" podID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerID="b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9" exitCode=0 Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.280153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerDied","Data":"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9"} Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.280158 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mhsdm" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.280192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mhsdm" event={"ID":"042a5279-ebe3-4e72-8254-c8850a9947a2","Type":"ContainerDied","Data":"00099d7b3a88e75c4dab4744a3960214ca9bbc055bf3f04e353f061b15d96f92"} Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.280226 4812 scope.go:117] "RemoveContainer" containerID="b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.303937 4812 scope.go:117] "RemoveContainer" containerID="7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.326427 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.330969 4812 scope.go:117] "RemoveContainer" containerID="1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.337635 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mhsdm"] Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.521630 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" path="/var/lib/kubelet/pods/042a5279-ebe3-4e72-8254-c8850a9947a2/volumes" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.854325 4812 scope.go:117] "RemoveContainer" containerID="b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9" Feb 18 17:50:32 crc kubenswrapper[4812]: E0218 17:50:32.855290 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9\": container with ID starting with b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9 not found: ID does not exist" containerID="b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.855333 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9"} err="failed to get container status \"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9\": rpc error: code = NotFound desc = could not find container \"b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9\": container with ID starting with b3ecd53ffe80a1fb4663e3a0cb8130b1ab5ed47fa6a3f6249d6d51ac433c89e9 not found: ID does not exist" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.855363 4812 scope.go:117] "RemoveContainer" containerID="7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed" Feb 18 17:50:32 crc kubenswrapper[4812]: E0218 17:50:32.855790 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed\": container with ID starting with 7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed not found: ID does not exist" containerID="7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.855827 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed"} err="failed to get container status \"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed\": rpc error: code = NotFound desc = could not find container \"7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed\": container with ID starting with 7c9fe72205148a4a0079514c8a08c01f581133d2c495e125451037366e1e06ed not found: ID does not exist" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.855852 4812 scope.go:117] "RemoveContainer" containerID="1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e" Feb 18 17:50:32 crc kubenswrapper[4812]: E0218 17:50:32.856203 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e\": container with ID starting with 1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e not found: ID does not exist" containerID="1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e" Feb 18 17:50:32 crc kubenswrapper[4812]: I0218 17:50:32.856255 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e"} err="failed to get container status \"1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e\": rpc error: code = NotFound desc = could not find container \"1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e\": container with ID starting with 1cad1d42718723b3df7fd14611c5cd524e9e7dd26404a445923814a600890a3e not found: ID does not exist" Feb 18 17:52:03 crc kubenswrapper[4812]: I0218 17:52:03.414136 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:52:03 crc kubenswrapper[4812]: I0218 17:52:03.414611 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:52:33 crc kubenswrapper[4812]: I0218 17:52:33.413479 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 17:52:33 crc kubenswrapper[4812]: I0218 17:52:33.413879 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.413638 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 
17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.414529 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.414657 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.415725 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.415826 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" gracePeriod=600 Feb 18 17:53:03 crc kubenswrapper[4812]: E0218 17:53:03.552383 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.682963 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" exitCode=0 Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.683064 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177"} Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.683208 4812 scope.go:117] "RemoveContainer" containerID="6534edefe322a03d16f3bed9a2b7d7ea21fefc307444a1843096c016c67a303b" Feb 18 17:53:03 crc kubenswrapper[4812]: I0218 17:53:03.684710 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:53:03 crc kubenswrapper[4812]: E0218 17:53:03.685427 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:53:17 crc kubenswrapper[4812]: I0218 17:53:17.508575 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:53:17 crc 
kubenswrapper[4812]: E0218 17:53:17.509366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:53:31 crc kubenswrapper[4812]: I0218 17:53:31.508327 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:53:31 crc kubenswrapper[4812]: E0218 17:53:31.509229 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:53:43 crc kubenswrapper[4812]: I0218 17:53:43.508583 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:53:43 crc kubenswrapper[4812]: E0218 17:53:43.509623 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:53:57 crc kubenswrapper[4812]: I0218 17:53:57.508955 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:53:57 crc kubenswrapper[4812]: E0218 17:53:57.509789 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:54:08 crc kubenswrapper[4812]: I0218 17:54:08.507777 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:54:08 crc kubenswrapper[4812]: E0218 17:54:08.509385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:54:17 crc kubenswrapper[4812]: I0218 17:54:17.035876 4812 scope.go:117] "RemoveContainer" containerID="3db547ff6f8de99a4a639364d774e712109d28127678a37d1f4a779e90a97c75" Feb 18 17:54:17 crc kubenswrapper[4812]: I0218 17:54:17.068084 4812 scope.go:117] "RemoveContainer" containerID="503a94b4c825090a8c0e2954158f053c026e1337a6bef38bf0a8dbb24decbd7a" Feb 18 17:54:17 crc 
kubenswrapper[4812]: I0218 17:54:17.101603 4812 scope.go:117] "RemoveContainer" containerID="2c56548a4b9cd147f888ea00942336a0ef1e5b3cf5b92169572b169c6ba40cde" Feb 18 17:54:21 crc kubenswrapper[4812]: I0218 17:54:21.507944 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:54:21 crc kubenswrapper[4812]: E0218 17:54:21.508524 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:54:33 crc kubenswrapper[4812]: I0218 17:54:33.508710 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:54:33 crc kubenswrapper[4812]: E0218 17:54:33.509382 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:54:46 crc kubenswrapper[4812]: I0218 17:54:46.509431 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:54:46 crc kubenswrapper[4812]: E0218 17:54:46.510837 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:54:57 crc kubenswrapper[4812]: I0218 17:54:57.507968 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:54:57 crc kubenswrapper[4812]: E0218 17:54:57.508762 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:55:08 crc kubenswrapper[4812]: I0218 17:55:08.508244 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:55:08 crc kubenswrapper[4812]: E0218 17:55:08.508978 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:55:23 crc 
kubenswrapper[4812]: I0218 17:55:23.509462 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:55:23 crc kubenswrapper[4812]: E0218 17:55:23.510649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:55:36 crc kubenswrapper[4812]: I0218 17:55:36.508257 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:55:36 crc kubenswrapper[4812]: E0218 17:55:36.509130 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:55:47 crc kubenswrapper[4812]: I0218 17:55:47.507668 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:55:47 crc kubenswrapper[4812]: E0218 17:55:47.508471 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:55:59 crc kubenswrapper[4812]: I0218 17:55:59.508786 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:55:59 crc kubenswrapper[4812]: E0218 17:55:59.510896 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:56:12 crc kubenswrapper[4812]: I0218 17:56:12.508143 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:56:12 crc kubenswrapper[4812]: E0218 17:56:12.508903 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:56:15 crc kubenswrapper[4812]: I0218 17:56:15.660495 4812 generic.go:334] "Generic (PLEG): container finished" podID="d55cc8b7-fd00-4b48-ae2c-458f83580502" 
containerID="396ab037eee11369c699fc2b8728410e2f423bd1cbe72a74f99caaeed28aeee7" exitCode=1 Feb 18 17:56:15 crc kubenswrapper[4812]: I0218 17:56:15.660594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d55cc8b7-fd00-4b48-ae2c-458f83580502","Type":"ContainerDied","Data":"396ab037eee11369c699fc2b8728410e2f423bd1cbe72a74f99caaeed28aeee7"} Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.048879 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.205858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.205952 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.205979 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206323 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206357 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pslpt\" (UniqueName: \"kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.206430 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config\") pod \"d55cc8b7-fd00-4b48-ae2c-458f83580502\" (UID: \"d55cc8b7-fd00-4b48-ae2c-458f83580502\") " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.207458 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data" (OuterVolumeSpecName: "config-data") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.214875 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.231422 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt" (OuterVolumeSpecName: "kube-api-access-pslpt") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "kube-api-access-pslpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.249751 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.266098 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.273436 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.286745 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.300565 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309208 4812 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309265 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309283 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pslpt\" (UniqueName: \"kubernetes.io/projected/d55cc8b7-fd00-4b48-ae2c-458f83580502-kube-api-access-pslpt\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309297 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309306 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309314 4812 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309322 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d55cc8b7-fd00-4b48-ae2c-458f83580502-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.309329 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d55cc8b7-fd00-4b48-ae2c-458f83580502-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.312949 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d55cc8b7-fd00-4b48-ae2c-458f83580502" (UID: "d55cc8b7-fd00-4b48-ae2c-458f83580502"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.331756 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.410632 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.410665 4812 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d55cc8b7-fd00-4b48-ae2c-458f83580502-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.682283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d55cc8b7-fd00-4b48-ae2c-458f83580502","Type":"ContainerDied","Data":"17a5ddfd21c812a3ea38314caf7d8f4f154420f645987183c6338dba77834796"} Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.682324 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17a5ddfd21c812a3ea38314caf7d8f4f154420f645987183c6338dba77834796" Feb 18 17:56:17 crc kubenswrapper[4812]: I0218 17:56:17.682357 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.523172 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 17:56:20 crc kubenswrapper[4812]: E0218 17:56:20.524705 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55cc8b7-fd00-4b48-ae2c-458f83580502" containerName="tempest-tests-tempest-tests-runner" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.524743 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55cc8b7-fd00-4b48-ae2c-458f83580502" containerName="tempest-tests-tempest-tests-runner" Feb 18 17:56:20 crc kubenswrapper[4812]: E0218 17:56:20.524789 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="extract-utilities" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.524813 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="extract-utilities" Feb 18 17:56:20 crc kubenswrapper[4812]: E0218 17:56:20.524843 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="registry-server" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.524853 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="registry-server" Feb 18 17:56:20 crc kubenswrapper[4812]: E0218 17:56:20.524865 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="extract-content" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.524872 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" containerName="extract-content" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.525120 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="042a5279-ebe3-4e72-8254-c8850a9947a2" 
containerName="registry-server" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.525141 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55cc8b7-fd00-4b48-ae2c-458f83580502" containerName="tempest-tests-tempest-tests-runner" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.525928 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.526032 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.528550 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sdp7q" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.681240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.681400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5frz\" (UniqueName: \"kubernetes.io/projected/3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b-kube-api-access-v5frz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.783968 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5frz\" (UniqueName: \"kubernetes.io/projected/3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b-kube-api-access-v5frz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.784267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.784905 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.813889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5frz\" (UniqueName: \"kubernetes.io/projected/3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b-kube-api-access-v5frz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.830671 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:20 crc kubenswrapper[4812]: I0218 17:56:20.850707 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 17:56:21 crc kubenswrapper[4812]: I0218 17:56:21.328589 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 17:56:21 crc kubenswrapper[4812]: W0218 17:56:21.330240 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d19b1b8_9ca5_40f8_93d2_aa4887e9a64b.slice/crio-9cb6f2c4f4a1a57b4bd51260c85c1e1a25550d79bfa33a64336d417a9bf6ee60 WatchSource:0}: Error finding container 9cb6f2c4f4a1a57b4bd51260c85c1e1a25550d79bfa33a64336d417a9bf6ee60: Status 404 returned error can't find the container with id 9cb6f2c4f4a1a57b4bd51260c85c1e1a25550d79bfa33a64336d417a9bf6ee60 Feb 18 17:56:21 crc kubenswrapper[4812]: I0218 17:56:21.333890 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 17:56:21 crc kubenswrapper[4812]: I0218 17:56:21.723121 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b","Type":"ContainerStarted","Data":"9cb6f2c4f4a1a57b4bd51260c85c1e1a25550d79bfa33a64336d417a9bf6ee60"} Feb 18 17:56:22 crc kubenswrapper[4812]: I0218 17:56:22.735643 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b","Type":"ContainerStarted","Data":"792cc4b6bc2b153620099ab2974f0fe05637e5cf6d5806e65dc7366a28c04c02"} Feb 18 17:56:22 crc kubenswrapper[4812]: I0218 17:56:22.753798 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.740109607 podStartE2EDuration="2.753768035s" podCreationTimestamp="2026-02-18 17:56:20 +0000 UTC" firstStartedPulling="2026-02-18 17:56:21.333355807 +0000 UTC m=+5201.598966756" lastFinishedPulling="2026-02-18 17:56:22.347014275 +0000 UTC m=+5202.612625184" observedRunningTime="2026-02-18 17:56:22.753762525 +0000 UTC m=+5203.019373444" watchObservedRunningTime="2026-02-18 17:56:22.753768035 +0000 UTC m=+5203.019378974" Feb 18 17:56:26 crc kubenswrapper[4812]: I0218 17:56:26.509261 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:56:26 crc kubenswrapper[4812]: E0218 17:56:26.510546 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:56:37 crc kubenswrapper[4812]: I0218 17:56:37.508314 4812 scope.go:117] "RemoveContainer" 
containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:56:37 crc kubenswrapper[4812]: E0218 17:56:37.509081 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:56:48 crc kubenswrapper[4812]: I0218 17:56:48.509420 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:56:48 crc kubenswrapper[4812]: E0218 17:56:48.510844 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.733852 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q556p/must-gather-mfh22"] Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.736032 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.737895 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q556p"/"openshift-service-ca.crt" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.738492 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q556p"/"kube-root-ca.crt" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.738245 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-q556p"/"default-dockercfg-xc5s2" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.750493 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q556p/must-gather-mfh22"] Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.857085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.857309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5q4m\" (UniqueName: \"kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.959695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5q4m\" (UniqueName: \"kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " 
pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.959790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.960201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:00 crc kubenswrapper[4812]: I0218 17:57:00.979024 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5q4m\" (UniqueName: \"kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m\") pod \"must-gather-mfh22\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:01 crc kubenswrapper[4812]: I0218 17:57:01.059609 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 17:57:01 crc kubenswrapper[4812]: I0218 17:57:01.504381 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q556p/must-gather-mfh22"] Feb 18 17:57:01 crc kubenswrapper[4812]: I0218 17:57:01.510803 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:57:01 crc kubenswrapper[4812]: E0218 17:57:01.511162 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:57:02 crc kubenswrapper[4812]: I0218 17:57:02.165263 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/must-gather-mfh22" event={"ID":"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9","Type":"ContainerStarted","Data":"dc9acd14328a8fc0f81a82968511703ac2e4137c408b257d9a9dec450557106f"} Feb 18 17:57:07 crc kubenswrapper[4812]: I0218 17:57:07.216557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/must-gather-mfh22" event={"ID":"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9","Type":"ContainerStarted","Data":"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7"} Feb 18 17:57:08 crc kubenswrapper[4812]: I0218 17:57:08.225206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/must-gather-mfh22" event={"ID":"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9","Type":"ContainerStarted","Data":"a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd"} Feb 18 17:57:08 crc kubenswrapper[4812]: I0218 17:57:08.239937 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-q556p/must-gather-mfh22" podStartSLOduration=2.802645756 podStartE2EDuration="8.239919181s" podCreationTimestamp="2026-02-18 17:57:00 +0000 UTC" 
firstStartedPulling="2026-02-18 17:57:01.51830339 +0000 UTC m=+5241.783914299" lastFinishedPulling="2026-02-18 17:57:06.955576825 +0000 UTC m=+5247.221187724" observedRunningTime="2026-02-18 17:57:08.237311526 +0000 UTC m=+5248.502922455" watchObservedRunningTime="2026-02-18 17:57:08.239919181 +0000 UTC m=+5248.505530090" Feb 18 17:57:10 crc kubenswrapper[4812]: I0218 17:57:10.931247 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q556p/crc-debug-lcbd7"] Feb 18 17:57:10 crc kubenswrapper[4812]: I0218 17:57:10.933961 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.060191 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnz85\" (UniqueName: \"kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.060406 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.161982 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnz85\" (UniqueName: \"kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.162455 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.162592 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.190817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnz85\" (UniqueName: \"kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85\") pod \"crc-debug-lcbd7\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: I0218 17:57:11.251919 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:57:11 crc kubenswrapper[4812]: W0218 17:57:11.291357 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37231221_d3be_4263_a210_384c6332d9be.slice/crio-bb85c18b5b17bdaa506e01b590dd9c604c69815499ee8a36bfba88ef46d6e1e1 WatchSource:0}: Error finding container bb85c18b5b17bdaa506e01b590dd9c604c69815499ee8a36bfba88ef46d6e1e1: Status 404 returned error can't find the container with id bb85c18b5b17bdaa506e01b590dd9c604c69815499ee8a36bfba88ef46d6e1e1 Feb 18 17:57:12 crc kubenswrapper[4812]: I0218 17:57:12.257599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-lcbd7" event={"ID":"37231221-d3be-4263-a210-384c6332d9be","Type":"ContainerStarted","Data":"bb85c18b5b17bdaa506e01b590dd9c604c69815499ee8a36bfba88ef46d6e1e1"} Feb 18 17:57:14 crc kubenswrapper[4812]: I0218 17:57:14.508208 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:57:14 crc kubenswrapper[4812]: E0218 17:57:14.509210 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:57:22 crc kubenswrapper[4812]: I0218 17:57:22.358348 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-lcbd7" event={"ID":"37231221-d3be-4263-a210-384c6332d9be","Type":"ContainerStarted","Data":"9d2e1d64d0ce6d81f2769a34e1258761448b75e947c409f7217684f62e547773"} Feb 18 17:57:22 crc kubenswrapper[4812]: I0218 17:57:22.374497 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-q556p/crc-debug-lcbd7" podStartSLOduration=2.483106426 podStartE2EDuration="12.37447551s" podCreationTimestamp="2026-02-18 17:57:10 +0000 UTC" firstStartedPulling="2026-02-18 17:57:11.295233892 +0000 UTC m=+5251.560844811" lastFinishedPulling="2026-02-18 17:57:21.186602986 +0000 UTC m=+5261.452213895" observedRunningTime="2026-02-18 17:57:22.371438944 +0000 UTC m=+5262.637049853" watchObservedRunningTime="2026-02-18 17:57:22.37447551 +0000 UTC m=+5262.640086409" Feb 18 17:57:25 crc kubenswrapper[4812]: I0218 17:57:25.508253 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:57:25 crc kubenswrapper[4812]: E0218 17:57:25.508937 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:57:40 crc kubenswrapper[4812]: I0218 17:57:40.512021 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:57:40 crc kubenswrapper[4812]: E0218 17:57:40.512822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:57:52 crc kubenswrapper[4812]: I0218 17:57:52.508342 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:57:52 crc kubenswrapper[4812]: E0218 17:57:52.508959 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 17:58:06 crc kubenswrapper[4812]: I0218 17:58:06.507811 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 17:58:06 crc kubenswrapper[4812]: I0218 17:58:06.750091 4812 generic.go:334] "Generic (PLEG): container finished" podID="37231221-d3be-4263-a210-384c6332d9be" containerID="9d2e1d64d0ce6d81f2769a34e1258761448b75e947c409f7217684f62e547773" exitCode=0 Feb 18 17:58:06 crc kubenswrapper[4812]: I0218 17:58:06.750138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-lcbd7" event={"ID":"37231221-d3be-4263-a210-384c6332d9be","Type":"ContainerDied","Data":"9d2e1d64d0ce6d81f2769a34e1258761448b75e947c409f7217684f62e547773"} Feb 18 17:58:07 crc kubenswrapper[4812]: I0218 17:58:07.760446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521"} Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.260513 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.307481 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q556p/crc-debug-lcbd7"] Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.316945 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q556p/crc-debug-lcbd7"] Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.369283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnz85\" (UniqueName: \"kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85\") pod \"37231221-d3be-4263-a210-384c6332d9be\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.369508 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host\") pod \"37231221-d3be-4263-a210-384c6332d9be\" (UID: \"37231221-d3be-4263-a210-384c6332d9be\") " Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.369605 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host" (OuterVolumeSpecName: "host") pod "37231221-d3be-4263-a210-384c6332d9be" (UID: "37231221-d3be-4263-a210-384c6332d9be"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.370159 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37231221-d3be-4263-a210-384c6332d9be-host\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.377144 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85" (OuterVolumeSpecName: "kube-api-access-jnz85") pod "37231221-d3be-4263-a210-384c6332d9be" (UID: "37231221-d3be-4263-a210-384c6332d9be"). InnerVolumeSpecName "kube-api-access-jnz85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.471669 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnz85\" (UniqueName: \"kubernetes.io/projected/37231221-d3be-4263-a210-384c6332d9be-kube-api-access-jnz85\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.519144 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37231221-d3be-4263-a210-384c6332d9be" path="/var/lib/kubelet/pods/37231221-d3be-4263-a210-384c6332d9be/volumes" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.770609 4812 scope.go:117] "RemoveContainer" containerID="9d2e1d64d0ce6d81f2769a34e1258761448b75e947c409f7217684f62e547773" Feb 18 17:58:08 crc kubenswrapper[4812]: I0218 17:58:08.770638 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-lcbd7" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.530562 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q556p/crc-debug-ptghq"] Feb 18 17:58:09 crc kubenswrapper[4812]: E0218 17:58:09.531247 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37231221-d3be-4263-a210-384c6332d9be" containerName="container-00" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.531268 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="37231221-d3be-4263-a210-384c6332d9be" containerName="container-00" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.531450 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="37231221-d3be-4263-a210-384c6332d9be" containerName="container-00" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.532099 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.587934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.587995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh57p\" (UniqueName: \"kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.690358 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.690481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.690491 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh57p\" (UniqueName: \"kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.710171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh57p\" (UniqueName: \"kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p\") pod \"crc-debug-ptghq\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:09 crc kubenswrapper[4812]: I0218 17:58:09.863084 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:10 crc kubenswrapper[4812]: I0218 17:58:10.831836 4812 generic.go:334] "Generic (PLEG): container finished" podID="cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" containerID="a5af720ae881806c124e6585e8a51a6b8df477d072ed1a535801f85f6434ef06" exitCode=0 Feb 18 17:58:10 crc kubenswrapper[4812]: I0218 17:58:10.832483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-ptghq" event={"ID":"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b","Type":"ContainerDied","Data":"a5af720ae881806c124e6585e8a51a6b8df477d072ed1a535801f85f6434ef06"} Feb 18 17:58:10 crc kubenswrapper[4812]: I0218 17:58:10.832568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-ptghq" event={"ID":"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b","Type":"ContainerStarted","Data":"1cb93e27917d9b514562a9359d2a05a88b2ac73d2f221499892dca93e98575a4"} Feb 18 17:58:11 crc kubenswrapper[4812]: I0218 17:58:11.937422 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.036671 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host\") pod \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.036856 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host" (OuterVolumeSpecName: "host") pod "cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" (UID: "cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.036976 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh57p\" (UniqueName: \"kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p\") pod \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\" (UID: \"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b\") " Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.037357 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-host\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.044296 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p" (OuterVolumeSpecName: "kube-api-access-dh57p") pod "cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" (UID: "cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b"). InnerVolumeSpecName "kube-api-access-dh57p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.139246 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh57p\" (UniqueName: \"kubernetes.io/projected/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b-kube-api-access-dh57p\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.851603 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-ptghq" event={"ID":"cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b","Type":"ContainerDied","Data":"1cb93e27917d9b514562a9359d2a05a88b2ac73d2f221499892dca93e98575a4"} Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.851955 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cb93e27917d9b514562a9359d2a05a88b2ac73d2f221499892dca93e98575a4" Feb 18 17:58:12 crc kubenswrapper[4812]: I0218 17:58:12.851848 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-ptghq" Feb 18 17:58:13 crc kubenswrapper[4812]: I0218 17:58:13.227491 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q556p/crc-debug-ptghq"] Feb 18 17:58:13 crc kubenswrapper[4812]: I0218 17:58:13.237995 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q556p/crc-debug-ptghq"] Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.424029 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q556p/crc-debug-zldcz"] Feb 18 17:58:14 crc kubenswrapper[4812]: E0218 17:58:14.424490 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" containerName="container-00" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.424504 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" containerName="container-00" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.424697 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" containerName="container-00" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.425365 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.520778 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b" path="/var/lib/kubelet/pods/cc705ca6-f08e-4596-b3f7-b11b2a8a1d7b/volumes" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.577759 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s957r\" (UniqueName: \"kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.578050 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.680209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s957r\" (UniqueName: \"kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.680316 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.680472 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.699502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s957r\" (UniqueName: \"kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r\") pod \"crc-debug-zldcz\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.747004 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:14 crc kubenswrapper[4812]: W0218 17:58:14.777945 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1d38e1_a064_42f2_aded_9ed28df9d76a.slice/crio-5bc06eb8d758cd6afecfbc69155f0b6adcd70ef7f2cb3314ac0c60c40d9cef8f WatchSource:0}: Error finding container 5bc06eb8d758cd6afecfbc69155f0b6adcd70ef7f2cb3314ac0c60c40d9cef8f: Status 404 returned error can't find the container with id 5bc06eb8d758cd6afecfbc69155f0b6adcd70ef7f2cb3314ac0c60c40d9cef8f Feb 18 17:58:14 crc kubenswrapper[4812]: I0218 17:58:14.869111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-zldcz" event={"ID":"dd1d38e1-a064-42f2-aded-9ed28df9d76a","Type":"ContainerStarted","Data":"5bc06eb8d758cd6afecfbc69155f0b6adcd70ef7f2cb3314ac0c60c40d9cef8f"} Feb 18 17:58:15 crc kubenswrapper[4812]: I0218 17:58:15.878861 4812 generic.go:334] "Generic (PLEG): container finished" podID="dd1d38e1-a064-42f2-aded-9ed28df9d76a" containerID="e588df195758dfe80afcbfba5d241184ae458ecf76cadaee3be1e6854ccefe3c" exitCode=0 Feb 18 17:58:15 crc kubenswrapper[4812]: I0218 17:58:15.878919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/crc-debug-zldcz" event={"ID":"dd1d38e1-a064-42f2-aded-9ed28df9d76a","Type":"ContainerDied","Data":"e588df195758dfe80afcbfba5d241184ae458ecf76cadaee3be1e6854ccefe3c"} Feb 18 17:58:15 crc kubenswrapper[4812]: I0218 17:58:15.923441 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q556p/crc-debug-zldcz"] Feb 18 17:58:15 crc kubenswrapper[4812]: I0218 17:58:15.932093 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q556p/crc-debug-zldcz"] Feb 18 17:58:16 crc kubenswrapper[4812]: I0218 17:58:16.987314 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.123079 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s957r\" (UniqueName: \"kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r\") pod \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.123342 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host\") pod \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\" (UID: \"dd1d38e1-a064-42f2-aded-9ed28df9d76a\") " Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.123455 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host" (OuterVolumeSpecName: "host") pod "dd1d38e1-a064-42f2-aded-9ed28df9d76a" (UID: "dd1d38e1-a064-42f2-aded-9ed28df9d76a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.124024 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd1d38e1-a064-42f2-aded-9ed28df9d76a-host\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.128566 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r" (OuterVolumeSpecName: "kube-api-access-s957r") pod "dd1d38e1-a064-42f2-aded-9ed28df9d76a" (UID: "dd1d38e1-a064-42f2-aded-9ed28df9d76a"). InnerVolumeSpecName "kube-api-access-s957r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.225686 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s957r\" (UniqueName: \"kubernetes.io/projected/dd1d38e1-a064-42f2-aded-9ed28df9d76a-kube-api-access-s957r\") on node \"crc\" DevicePath \"\"" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.908549 4812 scope.go:117] "RemoveContainer" containerID="e588df195758dfe80afcbfba5d241184ae458ecf76cadaee3be1e6854ccefe3c" Feb 18 17:58:17 crc kubenswrapper[4812]: I0218 17:58:17.908606 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/crc-debug-zldcz" Feb 18 17:58:18 crc kubenswrapper[4812]: I0218 17:58:18.524638 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1d38e1-a064-42f2-aded-9ed28df9d76a" path="/var/lib/kubelet/pods/dd1d38e1-a064-42f2-aded-9ed28df9d76a/volumes" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.596715 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56594bb5db-7s9w7_53d8634a-331f-4236-b554-a1a336a4510a/barbican-api/0.log" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.776522 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56594bb5db-7s9w7_53d8634a-331f-4236-b554-a1a336a4510a/barbican-api-log/0.log" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.784636 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54759bc498-rskxp_8fa7f426-d545-42e2-aa86-7b1f3fb6006f/barbican-keystone-listener/0.log" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.834934 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54759bc498-rskxp_8fa7f426-d545-42e2-aa86-7b1f3fb6006f/barbican-keystone-listener-log/0.log" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.984646 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-fc67d6965-tp8p8_d0217426-584c-43a5-8ada-d12d12452f63/barbican-worker/0.log" Feb 18 17:58:43 crc kubenswrapper[4812]: I0218 17:58:43.995452 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-fc67d6965-tp8p8_d0217426-584c-43a5-8ada-d12d12452f63/barbican-worker-log/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.163630 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cll9n_ed69aece-4a9c-4e29-a245-b31c021bbca6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.201943 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_f7e31fd2-effd-444c-9363-3f7cef593859/ceilometer-central-agent/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.263596 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f7e31fd2-effd-444c-9363-3f7cef593859/ceilometer-notification-agent/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.351230 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f7e31fd2-effd-444c-9363-3f7cef593859/proxy-httpd/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.388132 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f7e31fd2-effd-444c-9363-3f7cef593859/sg-core/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.509218 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7750602c-99bc-47df-850d-ed581888d80d/cinder-api/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.534321 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7750602c-99bc-47df-850d-ed581888d80d/cinder-api-log/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.670908 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80/cinder-scheduler/0.log" Feb 18 17:58:44 crc kubenswrapper[4812]: I0218 17:58:44.735319 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d9e8e68c-f0e0-4d09-a851-0bff2c7f6f80/probe/0.log" Feb 18 17:58:45 crc kubenswrapper[4812]: I0218 17:58:45.663082 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qvjgf_f98d198a-6397-422d-b0d0-0ec0d74e7f83/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:45 crc kubenswrapper[4812]: I0218 17:58:45.707854 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ldgvk_6be91d27-6c6b-4713-9845-4d582116ff6f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:45 crc kubenswrapper[4812]: I0218 17:58:45.857890 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-lddm7_d87bbc96-67c0-4404-b76a-8613492aec13/init/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.088323 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-lddm7_d87bbc96-67c0-4404-b76a-8613492aec13/init/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.104356 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kzvl8_28ecf721-9079-464c-8eb7-317ade066a09/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.157253 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-lddm7_d87bbc96-67c0-4404-b76a-8613492aec13/dnsmasq-dns/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.484910 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fa1f55d-b076-4277-8ba9-c80b987587fb/glance-log/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.506973 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3fa1f55d-b076-4277-8ba9-c80b987587fb/glance-httpd/0.log" Feb 18 
17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.643522 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cedf67dc-05b4-4294-84d6-19c9c649145c/glance-httpd/0.log" Feb 18 17:58:46 crc kubenswrapper[4812]: I0218 17:58:46.676839 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cedf67dc-05b4-4294-84d6-19c9c649145c/glance-log/0.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.365050 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-dbd455b84-x6fxk_11a958b4-3c26-4d73-acfa-fb3fb4c08cb2/horizon/1.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.382511 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jsszg_86b35ff7-e786-4747-877e-c60c4dd3f626/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.504371 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-dbd455b84-x6fxk_11a958b4-3c26-4d73-acfa-fb3fb4c08cb2/horizon/0.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.631441 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fxpqr_d671777c-dea8-4fb2-b203-40fa52f9b093/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.842477 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523901-8rkjl_8e55d885-ab77-4a7f-a3ea-085212e6fb6c/keystone-cron/0.log" Feb 18 17:58:47 crc kubenswrapper[4812]: I0218 17:58:47.959545 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-dbd455b84-x6fxk_11a958b4-3c26-4d73-acfa-fb3fb4c08cb2/horizon-log/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.080937 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bb35e4e7-97db-42af-b8c5-0b79550306f2/kube-state-metrics/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.125938 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65547bbfff-9ppm5_fa418512-c79e-452a-9791-67dfe6c3d772/keystone-api/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.183123 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tf9ln_d7659da5-6aa3-4372-94fb-12a2a30f7d24/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.532508 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7874df7-hld7g_fe5f38f6-ccbc-4355-b83e-c7b31825654c/neutron-httpd/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.578047 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7874df7-hld7g_fe5f38f6-ccbc-4355-b83e-c7b31825654c/neutron-api/0.log" Feb 18 17:58:48 crc kubenswrapper[4812]: I0218 17:58:48.664217 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-blhr7_d063dbe4-2200-4a71-b1d1-55fa4bc36f63/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.185850 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6f62e31f-2bed-4621-8627-abfb596eaf43/nova-cell0-conductor-conductor/0.log" 
Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.431906 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d3f7e00-7173-429d-957b-31388ff870d2/nova-cell1-conductor-conductor/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.697247 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_64753db2-4320-4180-9613-cf76f62101dc/nova-api-log/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.701028 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bc2c8be6-d665-457a-a0ae-0297547d9227/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.976267 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_64753db2-4320-4180-9613-cf76f62101dc/nova-api-api/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.981711 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gr8qk_c89bb32f-2416-4ee5-82b2-d0378c8cd0c0/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:49 crc kubenswrapper[4812]: I0218 17:58:49.996594 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e3402a7-2751-498c-af17-1895ac40880d/nova-metadata-log/0.log" Feb 18 17:58:50 crc kubenswrapper[4812]: I0218 17:58:50.493401 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d1c27f6-1144-40ce-a66c-a2c1fb4aa128/mysql-bootstrap/0.log" Feb 18 17:58:50 crc kubenswrapper[4812]: I0218 17:58:50.583345 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_089fc991-d92f-4f7d-9869-449514917e01/nova-scheduler-scheduler/0.log" Feb 18 17:58:50 crc kubenswrapper[4812]: I0218 17:58:50.734024 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d1c27f6-1144-40ce-a66c-a2c1fb4aa128/galera/0.log" Feb 18 17:58:50 crc kubenswrapper[4812]: I0218 17:58:50.809230 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d1c27f6-1144-40ce-a66c-a2c1fb4aa128/mysql-bootstrap/0.log" Feb 18 17:58:50 crc kubenswrapper[4812]: I0218 17:58:50.964788 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a1c50cf6-624a-4342-bc66-3a0789879e55/mysql-bootstrap/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.092702 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a1c50cf6-624a-4342-bc66-3a0789879e55/galera/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.104641 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a1c50cf6-624a-4342-bc66-3a0789879e55/mysql-bootstrap/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.290308 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_22112483-fba0-45a2-90d1-5f35b199a471/openstackclient/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.401754 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n59pt_9389e6cb-d8cb-4459-b9a0-1c1cb010d6a4/openstack-network-exporter/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.612502 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n9n6z_2a2e707c-718f-4f17-9b77-c883f7e9d9f3/ovn-controller/0.log" 
Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.763591 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s46ps_8a06b1c0-26fd-448a-ba31-9b6ff58ebab8/ovsdb-server-init/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.877489 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e3402a7-2751-498c-af17-1895ac40880d/nova-metadata-metadata/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.973556 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s46ps_8a06b1c0-26fd-448a-ba31-9b6ff58ebab8/ovsdb-server-init/0.log" Feb 18 17:58:51 crc kubenswrapper[4812]: I0218 17:58:51.989251 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s46ps_8a06b1c0-26fd-448a-ba31-9b6ff58ebab8/ovsdb-server/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.012967 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s46ps_8a06b1c0-26fd-448a-ba31-9b6ff58ebab8/ovs-vswitchd/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.201707 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-95582_8b2bfdae-9a0f-4740-96f7-f51e1db54c6b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.281158 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_350af9df-062b-44ba-bac2-66417c4dfcef/openstack-network-exporter/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.364864 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_350af9df-062b-44ba-bac2-66417c4dfcef/ovn-northd/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.435793 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b6943b9-4519-4dc3-9be1-96aa9eedcfda/openstack-network-exporter/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.491509 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b6943b9-4519-4dc3-9be1-96aa9eedcfda/ovsdbserver-nb/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.610265 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4c01c8a-b0df-43ab-9097-d619e00981d2/openstack-network-exporter/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.688704 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4c01c8a-b0df-43ab-9097-d619e00981d2/ovsdbserver-sb/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.980254 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fb97c8db4-nflvl_46a98f8d-436c-4726-aa29-f838c4f3d216/placement-api/0.log" Feb 18 17:58:52 crc kubenswrapper[4812]: I0218 17:58:52.981440 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f90b33eb-1f5b-4d69-8b0f-0798ac88e041/init-config-reloader/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.017378 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fb97c8db4-nflvl_46a98f8d-436c-4726-aa29-f838c4f3d216/placement-log/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.155160 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_f90b33eb-1f5b-4d69-8b0f-0798ac88e041/prometheus/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.217503 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f90b33eb-1f5b-4d69-8b0f-0798ac88e041/config-reloader/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.228820 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f90b33eb-1f5b-4d69-8b0f-0798ac88e041/init-config-reloader/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.229205 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f90b33eb-1f5b-4d69-8b0f-0798ac88e041/thanos-sidecar/0.log" Feb 18 17:58:53 crc kubenswrapper[4812]: I0218 17:58:53.576864 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6af8f1e1-753d-4010-90a4-8127e39198fa/setup-container/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.109675 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6af8f1e1-753d-4010-90a4-8127e39198fa/setup-container/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.215471 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bcd7726-b623-4b86-b8d9-391eea661d2f/setup-container/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.226457 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6af8f1e1-753d-4010-90a4-8127e39198fa/rabbitmq/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.378532 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bcd7726-b623-4b86-b8d9-391eea661d2f/setup-container/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.466782 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bcd7726-b623-4b86-b8d9-391eea661d2f/rabbitmq/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.503992 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zlnmz_ff12abcc-555a-4a37-8184-8889c7e5bcd9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.714513 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vf596_fb914cca-2704-4009-aa44-dfe3d6c00290/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.754466 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-b84kh_ec2aa7b7-90dd-406e-a503-b12166293cff/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.928553 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-znbmb_65be9e89-0994-447a-a008-f08ad56b0371/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:54 crc kubenswrapper[4812]: I0218 17:58:54.990084 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-56nkv_f60f3d41-33cb-4204-a290-d5bc374f6116/ssh-known-hosts-edpm-deployment/0.log" Feb 18 17:58:55 crc kubenswrapper[4812]: I0218 17:58:55.296301 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-db576bcfc-pcjbk_b814aa4e-5f04-4919-bfb3-153dd88e6ef8/proxy-httpd/0.log" Feb 18 17:58:55 crc kubenswrapper[4812]: I0218 17:58:55.746558 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-db576bcfc-pcjbk_b814aa4e-5f04-4919-bfb3-153dd88e6ef8/proxy-server/0.log" Feb 18 17:58:55 crc kubenswrapper[4812]: I0218 17:58:55.811289 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pfnnt_430cd891-febe-45a3-9d5d-97b3933ab503/swift-ring-rebalance/0.log" Feb 18 17:58:55 crc kubenswrapper[4812]: I0218 17:58:55.923620 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/account-auditor/0.log" Feb 18 17:58:55 crc kubenswrapper[4812]: I0218 17:58:55.997592 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/account-reaper/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.058287 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/account-server/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.082454 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/account-replicator/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.143278 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/container-auditor/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.252086 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/container-replicator/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.299117 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/container-updater/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.335727 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/container-server/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.375021 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/object-auditor/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.475702 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/object-expirer/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.525370 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/object-replicator/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.528960 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/object-server/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.596762 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/object-updater/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.703754 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/rsync/0.log" Feb 18 
17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.730383 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_795346dc-bc66-461a-bb9e-64991ac27a50/swift-recon-cron/0.log" Feb 18 17:58:56 crc kubenswrapper[4812]: I0218 17:58:56.981564 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-w8mv4_7430437b-aab4-42f1-be95-3b98539e570f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:57 crc kubenswrapper[4812]: I0218 17:58:57.249633 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3d19b1b8-9ca5-40f8-93d2-aa4887e9a64b/test-operator-logs-container/0.log" Feb 18 17:58:57 crc kubenswrapper[4812]: I0218 17:58:57.250656 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d55cc8b7-fd00-4b48-ae2c-458f83580502/tempest-tests-tempest-tests-runner/0.log" Feb 18 17:58:57 crc kubenswrapper[4812]: I0218 17:58:57.439490 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xwn5v_ccbed1c0-c019-49d0-9c31-3e16f1254d9b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 17:58:58 crc kubenswrapper[4812]: I0218 17:58:58.083557 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_5f700e3b-d59f-4f8b-8ad0-845f2f5cb651/watcher-applier/0.log" Feb 18 17:58:58 crc kubenswrapper[4812]: I0218 17:58:58.446723 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9512a70d-2793-4aac-bccc-4ed1d50aeb5b/watcher-api-log/0.log" Feb 18 17:58:58 crc kubenswrapper[4812]: I0218 17:58:58.988848 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_1c298149-36ad-42bd-b736-f1fe48687edf/watcher-decision-engine/0.log" Feb 18 17:58:59 crc kubenswrapper[4812]: I0218 17:58:59.651290 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e21653f2-3333-4f74-b1c7-3d34c6ab4280/memcached/0.log" Feb 18 17:59:00 crc kubenswrapper[4812]: I0218 17:59:00.474941 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9512a70d-2793-4aac-bccc-4ed1d50aeb5b/watcher-api/0.log" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.569954 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:01 crc kubenswrapper[4812]: E0218 17:59:01.570424 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1d38e1-a064-42f2-aded-9ed28df9d76a" containerName="container-00" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.570762 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1d38e1-a064-42f2-aded-9ed28df9d76a" containerName="container-00" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.571037 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1d38e1-a064-42f2-aded-9ed28df9d76a" containerName="container-00" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.572769 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.628960 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.755463 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.755534 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.755780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gls7k\" (UniqueName: \"kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.858535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gls7k\" (UniqueName: \"kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.858700 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.858728 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.859177 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.859234 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.878209 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gls7k\" (UniqueName: \"kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k\") pod \"certified-operators-l4dmg\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:01 crc kubenswrapper[4812]: I0218 17:59:01.891635 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:02 crc kubenswrapper[4812]: I0218 17:59:02.408244 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:03 crc kubenswrapper[4812]: I0218 17:59:03.324366 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerID="b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d" exitCode=0 Feb 18 17:59:03 crc kubenswrapper[4812]: I0218 17:59:03.324476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerDied","Data":"b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d"} Feb 18 17:59:03 crc kubenswrapper[4812]: I0218 17:59:03.325203 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerStarted","Data":"ca74c6a7ecb8ac42c555c87dde759c5551c4fc5c8baf3796dbba2384e01f3a96"} Feb 18 17:59:05 crc kubenswrapper[4812]: I0218 17:59:05.345151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerStarted","Data":"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108"} Feb 18 17:59:06 crc kubenswrapper[4812]: I0218 17:59:06.355797 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerID="ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108" exitCode=0 Feb 18 17:59:06 crc kubenswrapper[4812]: I0218 17:59:06.355857 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerDied","Data":"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108"} Feb 18 17:59:07 crc kubenswrapper[4812]: I0218 17:59:07.365613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerStarted","Data":"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a"} Feb 18 17:59:07 crc kubenswrapper[4812]: I0218 17:59:07.388809 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l4dmg" podStartSLOduration=2.945934407 podStartE2EDuration="6.388787919s" podCreationTimestamp="2026-02-18 17:59:01 +0000 UTC" firstStartedPulling="2026-02-18 17:59:03.326237032 +0000 UTC m=+5363.591847941" lastFinishedPulling="2026-02-18 17:59:06.769090544 +0000 UTC m=+5367.034701453" observedRunningTime="2026-02-18 17:59:07.38561216 +0000 UTC m=+5367.651223079" watchObservedRunningTime="2026-02-18 17:59:07.388787919 +0000 UTC m=+5367.654398828" Feb 18 17:59:11 crc kubenswrapper[4812]: I0218 17:59:11.892351 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:11 crc kubenswrapper[4812]: I0218 17:59:11.892961 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:11 crc kubenswrapper[4812]: I0218 17:59:11.998296 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:12 crc kubenswrapper[4812]: I0218 17:59:12.449481 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:12 crc kubenswrapper[4812]: I0218 17:59:12.496024 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.420865 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l4dmg" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="registry-server" containerID="cri-o://092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a" gracePeriod=2 Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.855651 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.980734 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gls7k\" (UniqueName: \"kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k\") pod \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.980879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content\") pod \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.981011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities\") pod \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\" (UID: \"e8c327a6-85e8-48b7-b005-e5432cc8fe02\") " Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.982391 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities" (OuterVolumeSpecName: "utilities") pod "e8c327a6-85e8-48b7-b005-e5432cc8fe02" (UID: "e8c327a6-85e8-48b7-b005-e5432cc8fe02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:59:14 crc kubenswrapper[4812]: I0218 17:59:14.988382 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k" (OuterVolumeSpecName: "kube-api-access-gls7k") pod "e8c327a6-85e8-48b7-b005-e5432cc8fe02" (UID: "e8c327a6-85e8-48b7-b005-e5432cc8fe02"). InnerVolumeSpecName "kube-api-access-gls7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.044742 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c327a6-85e8-48b7-b005-e5432cc8fe02" (UID: "e8c327a6-85e8-48b7-b005-e5432cc8fe02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.083982 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gls7k\" (UniqueName: \"kubernetes.io/projected/e8c327a6-85e8-48b7-b005-e5432cc8fe02-kube-api-access-gls7k\") on node \"crc\" DevicePath \"\"" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.084388 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.084401 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c327a6-85e8-48b7-b005-e5432cc8fe02-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.431836 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerID="092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a" exitCode=0 Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.431883 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerDied","Data":"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a"} Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.431906 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4dmg" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.431926 4812 scope.go:117] "RemoveContainer" containerID="092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.431913 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4dmg" event={"ID":"e8c327a6-85e8-48b7-b005-e5432cc8fe02","Type":"ContainerDied","Data":"ca74c6a7ecb8ac42c555c87dde759c5551c4fc5c8baf3796dbba2384e01f3a96"} Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.452038 4812 scope.go:117] "RemoveContainer" containerID="ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.470068 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.476608 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l4dmg"] Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.484224 4812 scope.go:117] "RemoveContainer" containerID="b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.528251 4812 scope.go:117] "RemoveContainer" containerID="092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a" Feb 18 17:59:15 crc kubenswrapper[4812]: E0218 17:59:15.528854 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a\": container with ID starting with 092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a not found: ID does not exist" containerID="092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.528935 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a"} err="failed to get container status \"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a\": rpc error: code = NotFound desc = could not find container \"092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a\": container with ID starting with 092679d7b7f62d2f1082091675ebc4883b8f27e2f97f5c3f647203571aff1f1a not found: ID does not exist" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.528988 4812 scope.go:117] "RemoveContainer" containerID="ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108" Feb 18 17:59:15 crc kubenswrapper[4812]: E0218 17:59:15.529358 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108\": container with ID starting with ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108 not found: ID does not exist" containerID="ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.529397 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108"} err="failed to get container status \"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108\": rpc error: code = NotFound desc = could not find 
container \"ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108\": container with ID starting with ab27fa8c5dd53311fae2dd1d9fd2f5737487e63221f639008b86a41eab5ff108 not found: ID does not exist" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.529423 4812 scope.go:117] "RemoveContainer" containerID="b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d" Feb 18 17:59:15 crc kubenswrapper[4812]: E0218 17:59:15.529656 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d\": container with ID starting with b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d not found: ID does not exist" containerID="b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d" Feb 18 17:59:15 crc kubenswrapper[4812]: I0218 17:59:15.529708 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d"} err="failed to get container status \"b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d\": rpc error: code = NotFound desc = could not find container \"b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d\": container with ID starting with b295a23234d9165498f4d121eab7b7252bca2150d73122a7d8f6a4d2198ace6d not found: ID does not exist" Feb 18 17:59:16 crc kubenswrapper[4812]: I0218 17:59:16.518984 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" path="/var/lib/kubelet/pods/e8c327a6-85e8-48b7-b005-e5432cc8fe02/volumes" Feb 18 17:59:26 crc kubenswrapper[4812]: I0218 17:59:26.808414 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/util/0.log" Feb 18 17:59:26 crc kubenswrapper[4812]: I0218 17:59:26.963398 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/util/0.log" Feb 18 17:59:26 crc kubenswrapper[4812]: I0218 17:59:26.990778 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/pull/0.log" Feb 18 17:59:27 crc kubenswrapper[4812]: I0218 17:59:27.006464 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/pull/0.log" Feb 18 17:59:27 crc kubenswrapper[4812]: I0218 17:59:27.165723 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/util/0.log" Feb 18 17:59:27 crc kubenswrapper[4812]: I0218 17:59:27.167146 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/pull/0.log" Feb 18 17:59:27 crc kubenswrapper[4812]: I0218 17:59:27.173538 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0b71fbec7d81f960242045a83946a623bc1261d5854a58e4e7f3552e21xvrlp_34d55cc3-88ce-4b05-8837-9a3f25cd4570/extract/0.log" Feb 18 17:59:27 crc 
kubenswrapper[4812]: I0218 17:59:27.511892 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-wq29b_0f46711f-425e-4dbb-8a5d-ed6084adfde8/manager/0.log" Feb 18 17:59:27 crc kubenswrapper[4812]: I0218 17:59:27.855282 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-q4s5n_196a8044-f16c-465d-a1e4-e1e6703bf050/manager/0.log" Feb 18 17:59:28 crc kubenswrapper[4812]: I0218 17:59:28.078038 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-qfk95_c5c31acb-2c1f-4923-833a-68de35fb9d54/manager/0.log" Feb 18 17:59:28 crc kubenswrapper[4812]: I0218 17:59:28.272477 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-dtgqt_ab3c44e4-8127-4e37-a4be-44e1b85ef218/manager/0.log" Feb 18 17:59:28 crc kubenswrapper[4812]: I0218 17:59:28.760451 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-fgsbn_1def9ee0-6aa7-4cc0-a709-a66e4c952d03/manager/0.log" Feb 18 17:59:29 crc kubenswrapper[4812]: I0218 17:59:29.025764 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cwrzs_08ea33ce-0d14-439c-9e63-f06d21d6907a/manager/0.log" Feb 18 17:59:29 crc kubenswrapper[4812]: I0218 17:59:29.544442 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-57f4l_8717609a-7f7e-4de2-b0ec-93cc0539c922/manager/0.log" Feb 18 17:59:29 crc kubenswrapper[4812]: I0218 17:59:29.669965 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-mnln4_fab34061-20c2-4e93-b9fb-3d8a62ffdb72/manager/0.log" Feb 18 17:59:29 crc kubenswrapper[4812]: I0218 17:59:29.819579 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-zw426_78fc7ff4-fa73-4323-9756-db5902a66158/manager/0.log" Feb 18 17:59:29 crc kubenswrapper[4812]: I0218 17:59:29.905665 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-5v97q_7bd80a27-b40d-4a43-8956-01e91ba58029/manager/0.log" Feb 18 17:59:30 crc kubenswrapper[4812]: I0218 17:59:30.199061 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-s886f_bb6257e0-6420-4136-858f-ee944d0493e3/manager/0.log" Feb 18 17:59:30 crc kubenswrapper[4812]: I0218 17:59:30.270631 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-7t4r7_8dde41a0-6a01-4fdf-afe1-caf72e221917/manager/0.log" Feb 18 17:59:30 crc kubenswrapper[4812]: I0218 17:59:30.567537 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9csgcwk_8cfe2837-e258-42f2-8634-f20c3142d708/manager/0.log" Feb 18 17:59:31 crc kubenswrapper[4812]: I0218 17:59:31.077519 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7488c4c4f-csxg7_7247ef5f-7aa8-4b7c-a9bd-20e50002b7cb/operator/0.log" Feb 18 17:59:31 crc kubenswrapper[4812]: I0218 
17:59:31.153382 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ph7db_1d7285f1-1635-4793-9ed6-1eff7ac4153b/registry-server/0.log" Feb 18 17:59:31 crc kubenswrapper[4812]: I0218 17:59:31.400269 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-svznc_07b7334e-7887-47ab-b54a-950e0abef136/manager/0.log" Feb 18 17:59:31 crc kubenswrapper[4812]: I0218 17:59:31.681247 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-8ts5w_d9f004a9-719f-44da-8afc-8d107e751740/manager/0.log" Feb 18 17:59:32 crc kubenswrapper[4812]: I0218 17:59:32.377332 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-njc66_66bb936b-e65a-4f8a-8e24-3066bb11f30e/operator/0.log" Feb 18 17:59:32 crc kubenswrapper[4812]: I0218 17:59:32.587054 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-ths6j_77a78e58-327b-41c9-9476-ed0c0d665938/manager/0.log" Feb 18 17:59:32 crc kubenswrapper[4812]: I0218 17:59:32.974268 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-gpk4n_77dd6c28-0191-413f-90f0-9c85b340dd9c/manager/0.log" Feb 18 17:59:32 crc kubenswrapper[4812]: I0218 17:59:32.988344 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d47b7586b-kpwkf_7ee716d3-9aa5-4c80-872a-7183662658a1/manager/0.log" Feb 18 17:59:33 crc kubenswrapper[4812]: I0218 17:59:33.004633 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-kztm9_1992f7af-ff5e-4b9d-9820-134811e95a33/manager/0.log" Feb 18 17:59:33 crc kubenswrapper[4812]: I0218 17:59:33.126890 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-gvlj5_d2a5bf35-89b6-4fee-94e4-d118f9cfacc3/manager/0.log" Feb 18 17:59:33 crc kubenswrapper[4812]: I0218 17:59:33.278486 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-55ccccfbc7-nmczt_97e0541c-504a-4610-b930-db20a8c00302/manager/0.log" Feb 18 17:59:38 crc kubenswrapper[4812]: I0218 17:59:38.303932 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-rns6l_aaa3e7f7-66d3-4e53-8cf7-f70ec1736efa/manager/0.log" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.937963 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 17:59:39 crc kubenswrapper[4812]: E0218 17:59:39.938997 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="registry-server" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.939018 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="registry-server" Feb 18 17:59:39 crc kubenswrapper[4812]: E0218 17:59:39.939050 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="extract-utilities" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.939059 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="extract-utilities" Feb 18 17:59:39 crc kubenswrapper[4812]: E0218 17:59:39.939113 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="extract-content" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.939123 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="extract-content" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.939404 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c327a6-85e8-48b7-b005-e5432cc8fe02" containerName="registry-server" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.941474 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:39 crc kubenswrapper[4812]: I0218 17:59:39.958132 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.043671 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9knk\" (UniqueName: \"kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.043897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.044468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.146895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9knk\" (UniqueName: \"kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.147002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.147160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.147524 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.147602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.168174 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9knk\" (UniqueName: \"kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk\") pod \"redhat-operators-4nb49\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.279440 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:40 crc kubenswrapper[4812]: I0218 17:59:40.804169 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 17:59:41 crc kubenswrapper[4812]: I0218 17:59:41.668173 4812 generic.go:334] "Generic (PLEG): container finished" podID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerID="36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935" exitCode=0 Feb 18 17:59:41 crc kubenswrapper[4812]: I0218 17:59:41.668259 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerDied","Data":"36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935"} Feb 18 17:59:41 crc kubenswrapper[4812]: I0218 17:59:41.668455 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerStarted","Data":"d33c51e16494949f08c69086e119bdfda7e54f16bb1c6a1f1276ebc5bdf4af55"} Feb 18 17:59:42 crc kubenswrapper[4812]: I0218 17:59:42.681699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerStarted","Data":"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93"} Feb 18 17:59:47 crc kubenswrapper[4812]: I0218 17:59:47.761128 4812 generic.go:334] "Generic (PLEG): container finished" podID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerID="2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93" exitCode=0 Feb 18 17:59:47 crc kubenswrapper[4812]: I0218 17:59:47.761659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerDied","Data":"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93"} Feb 18 17:59:48 crc kubenswrapper[4812]: I0218 17:59:48.772348 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" 
event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerStarted","Data":"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1"} Feb 18 17:59:48 crc kubenswrapper[4812]: I0218 17:59:48.796991 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4nb49" podStartSLOduration=3.267411416 podStartE2EDuration="9.796973274s" podCreationTimestamp="2026-02-18 17:59:39 +0000 UTC" firstStartedPulling="2026-02-18 17:59:41.669992616 +0000 UTC m=+5401.935603525" lastFinishedPulling="2026-02-18 17:59:48.199554464 +0000 UTC m=+5408.465165383" observedRunningTime="2026-02-18 17:59:48.78962959 +0000 UTC m=+5409.055240499" watchObservedRunningTime="2026-02-18 17:59:48.796973274 +0000 UTC m=+5409.062584183" Feb 18 17:59:50 crc kubenswrapper[4812]: I0218 17:59:50.279916 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:50 crc kubenswrapper[4812]: I0218 17:59:50.280226 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 17:59:51 crc kubenswrapper[4812]: I0218 17:59:51.329663 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4nb49" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" probeResult="failure" output=< Feb 18 17:59:51 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 17:59:51 crc kubenswrapper[4812]: > Feb 18 17:59:55 crc kubenswrapper[4812]: I0218 17:59:55.174371 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l4krn_a4d4aac9-ea9e-4c6c-a806-1fab4fe8f20d/control-plane-machine-set-operator/0.log" Feb 18 17:59:55 crc kubenswrapper[4812]: I0218 17:59:55.357429 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r748z_7f8a50e5-17af-449c-9e9f-ff051ba9c99f/kube-rbac-proxy/0.log" Feb 18 17:59:55 crc kubenswrapper[4812]: I0218 17:59:55.398453 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r748z_7f8a50e5-17af-449c-9e9f-ff051ba9c99f/machine-api-operator/0.log" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.148117 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww"] Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.149787 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.153293 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.153502 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.175335 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww"] Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.244706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd4zl\" (UniqueName: \"kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.244822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.244861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.347873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd4zl\" (UniqueName: \"kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.348000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.348037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.351077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume\") pod 
\"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.355127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.369552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd4zl\" (UniqueName: \"kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl\") pod \"collect-profiles-29523960-7kvww\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.474389 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:00 crc kubenswrapper[4812]: I0218 18:00:00.911986 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww"] Feb 18 18:00:01 crc kubenswrapper[4812]: I0218 18:00:01.325693 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4nb49" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" probeResult="failure" output=< Feb 18 18:00:01 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 18 18:00:01 crc kubenswrapper[4812]: > Feb 18 18:00:01 crc kubenswrapper[4812]: I0218 18:00:01.887083 4812 generic.go:334] "Generic (PLEG): container finished" podID="375fea5a-bafd-4076-bc7c-abbef843e3d6" containerID="3b6eb1541debe82a57bced74a11b8733798ec1bc3e71a00143d11b9da270bec7" exitCode=0 Feb 18 18:00:01 crc kubenswrapper[4812]: I0218 18:00:01.887132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" event={"ID":"375fea5a-bafd-4076-bc7c-abbef843e3d6","Type":"ContainerDied","Data":"3b6eb1541debe82a57bced74a11b8733798ec1bc3e71a00143d11b9da270bec7"} Feb 18 18:00:01 crc kubenswrapper[4812]: I0218 18:00:01.887159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" event={"ID":"375fea5a-bafd-4076-bc7c-abbef843e3d6","Type":"ContainerStarted","Data":"8e701e559b4861c3857caf70066b7fc99c9e83c3fce5ccdc0e7ac5d66ec8b10e"} Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.248032 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.307603 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd4zl\" (UniqueName: \"kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl\") pod \"375fea5a-bafd-4076-bc7c-abbef843e3d6\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.307796 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume\") pod \"375fea5a-bafd-4076-bc7c-abbef843e3d6\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.307824 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume\") pod \"375fea5a-bafd-4076-bc7c-abbef843e3d6\" (UID: \"375fea5a-bafd-4076-bc7c-abbef843e3d6\") " Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.308378 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "375fea5a-bafd-4076-bc7c-abbef843e3d6" (UID: "375fea5a-bafd-4076-bc7c-abbef843e3d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.313391 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "375fea5a-bafd-4076-bc7c-abbef843e3d6" (UID: "375fea5a-bafd-4076-bc7c-abbef843e3d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.313936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl" (OuterVolumeSpecName: "kube-api-access-zd4zl") pod "375fea5a-bafd-4076-bc7c-abbef843e3d6" (UID: "375fea5a-bafd-4076-bc7c-abbef843e3d6"). InnerVolumeSpecName "kube-api-access-zd4zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.410243 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd4zl\" (UniqueName: \"kubernetes.io/projected/375fea5a-bafd-4076-bc7c-abbef843e3d6-kube-api-access-zd4zl\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.410283 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/375fea5a-bafd-4076-bc7c-abbef843e3d6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.410294 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/375fea5a-bafd-4076-bc7c-abbef843e3d6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.909864 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" event={"ID":"375fea5a-bafd-4076-bc7c-abbef843e3d6","Type":"ContainerDied","Data":"8e701e559b4861c3857caf70066b7fc99c9e83c3fce5ccdc0e7ac5d66ec8b10e"} Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.909894 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523960-7kvww" Feb 18 18:00:03 crc kubenswrapper[4812]: I0218 18:00:03.909909 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e701e559b4861c3857caf70066b7fc99c9e83c3fce5ccdc0e7ac5d66ec8b10e" Feb 18 18:00:04 crc kubenswrapper[4812]: I0218 18:00:04.325164 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm"] Feb 18 18:00:04 crc kubenswrapper[4812]: I0218 18:00:04.334635 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523915-vrclm"] Feb 18 18:00:04 crc kubenswrapper[4812]: I0218 18:00:04.521133 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76251634-ff4b-4bbe-a040-05f7b8118ec4" path="/var/lib/kubelet/pods/76251634-ff4b-4bbe-a040-05f7b8118ec4/volumes" Feb 18 18:00:08 crc kubenswrapper[4812]: I0218 18:00:08.495781 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h7j7l_35d9ea4b-c563-487f-ab95-bfb14d853e68/cert-manager-controller/0.log" Feb 18 18:00:08 crc kubenswrapper[4812]: I0218 18:00:08.614921 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wk4sl_67a0e83c-d0b4-4eb0-9525-3a4c502073d8/cert-manager-cainjector/0.log" Feb 18 18:00:08 crc kubenswrapper[4812]: I0218 18:00:08.672815 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qln2f_08fea773-a7c8-4ba7-94dd-3d28d98dea63/cert-manager-webhook/0.log" Feb 18 18:00:10 crc kubenswrapper[4812]: I0218 18:00:10.343615 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 18:00:10 crc kubenswrapper[4812]: I0218 18:00:10.429779 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 18:00:11 crc kubenswrapper[4812]: I0218 18:00:11.135890 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 18:00:11 crc kubenswrapper[4812]: I0218 18:00:11.976066 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4nb49" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" containerID="cri-o://f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1" gracePeriod=2 Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.451533 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.584647 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content\") pod \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.585363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities\") pod \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.585481 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9knk\" (UniqueName: \"kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk\") pod \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\" (UID: \"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a\") " Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.586232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities" (OuterVolumeSpecName: "utilities") pod "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" (UID: "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.587569 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.591583 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk" (OuterVolumeSpecName: "kube-api-access-l9knk") pod "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" (UID: "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a"). InnerVolumeSpecName "kube-api-access-l9knk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.689758 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9knk\" (UniqueName: \"kubernetes.io/projected/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-kube-api-access-l9knk\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.730494 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" (UID: "5e8b41f3-ac27-4b79-a7b6-1232a86edb9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.792328 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.991582 4812 generic.go:334] "Generic (PLEG): container finished" podID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerID="f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1" exitCode=0 Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.991645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerDied","Data":"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1"} Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.991682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4nb49" event={"ID":"5e8b41f3-ac27-4b79-a7b6-1232a86edb9a","Type":"ContainerDied","Data":"d33c51e16494949f08c69086e119bdfda7e54f16bb1c6a1f1276ebc5bdf4af55"} Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.991704 4812 scope.go:117] "RemoveContainer" containerID="f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1" Feb 18 18:00:12 crc kubenswrapper[4812]: I0218 18:00:12.991836 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4nb49" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.015572 4812 scope.go:117] "RemoveContainer" containerID="2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.033039 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.040579 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4nb49"] Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.061034 4812 scope.go:117] "RemoveContainer" containerID="36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.084505 4812 scope.go:117] "RemoveContainer" containerID="f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1" Feb 18 18:00:13 crc kubenswrapper[4812]: E0218 18:00:13.084968 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1\": container with ID starting with f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1 not found: ID does not exist" containerID="f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.085016 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1"} err="failed to get container status \"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1\": rpc error: code = NotFound desc = could not find container \"f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1\": container with ID starting with f55f25ee2527b162045d7425af7c7caf5bbabf9a84a81fe86a4744d30664e4c1 not found: ID does not exist" Feb 18 18:00:13 crc 
kubenswrapper[4812]: I0218 18:00:13.085040 4812 scope.go:117] "RemoveContainer" containerID="2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93" Feb 18 18:00:13 crc kubenswrapper[4812]: E0218 18:00:13.085490 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93\": container with ID starting with 2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93 not found: ID does not exist" containerID="2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.085518 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93"} err="failed to get container status \"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93\": rpc error: code = NotFound desc = could not find container \"2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93\": container with ID starting with 2d248f3f09c4321b7e7eb5f7485fe7a13e593e8eb9db92a301eb2711965a8e93 not found: ID does not exist" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.085547 4812 scope.go:117] "RemoveContainer" containerID="36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935" Feb 18 18:00:13 crc kubenswrapper[4812]: E0218 18:00:13.085856 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935\": container with ID starting with 36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935 not found: ID does not exist" containerID="36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935" Feb 18 18:00:13 crc kubenswrapper[4812]: I0218 18:00:13.085878 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935"} err="failed to get container status \"36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935\": rpc error: code = NotFound desc = could not find container \"36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935\": container with ID starting with 36c4a1bf8aa2d705e0ede52ac5be1e284c6c379ad3215e295bf9331ea6fa0935 not found: ID does not exist" Feb 18 18:00:14 crc kubenswrapper[4812]: I0218 18:00:14.522810 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" path="/var/lib/kubelet/pods/5e8b41f3-ac27-4b79-a7b6-1232a86edb9a/volumes" Feb 18 18:00:17 crc kubenswrapper[4812]: I0218 18:00:17.361127 4812 scope.go:117] "RemoveContainer" containerID="bfb122ec8d3d471c23f35fed2d5a3ce5c84c0826faf8d8e06da37e47e9f3e803" Feb 18 18:00:21 crc kubenswrapper[4812]: I0218 18:00:21.437617 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-7s5pc_1a0e4522-b91b-47b6-a36c-902c8e98a845/nmstate-console-plugin/0.log" Feb 18 18:00:21 crc kubenswrapper[4812]: I0218 18:00:21.617967 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-f446z_6a4fd47f-29ab-45bd-86d8-865d91e44d02/nmstate-handler/0.log" Feb 18 18:00:21 crc kubenswrapper[4812]: I0218 18:00:21.677641 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-k5jtp_4eeb831e-7c1b-4a4b-ab6f-de2702714fa7/kube-rbac-proxy/0.log" Feb 18 18:00:21 crc kubenswrapper[4812]: I0218 18:00:21.703564 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-k5jtp_4eeb831e-7c1b-4a4b-ab6f-de2702714fa7/nmstate-metrics/0.log" Feb 18 18:00:22 crc kubenswrapper[4812]: I0218 18:00:22.054221 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-wj667_7f47ac3a-7734-4e9b-8ce2-bc31cc0b6d58/nmstate-operator/0.log" Feb 18 18:00:22 crc kubenswrapper[4812]: I0218 18:00:22.091666 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-stnd4_c3fe979f-501c-40cb-a1c4-e84fb119d112/nmstate-webhook/0.log" Feb 18 18:00:33 crc kubenswrapper[4812]: I0218 18:00:33.413727 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:00:33 crc kubenswrapper[4812]: I0218 18:00:33.417565 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:00:35 crc kubenswrapper[4812]: I0218 18:00:35.080858 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cdpj2_c18e9953-e57b-4c8e-832e-a8a62a1b00d4/prometheus-operator/0.log" Feb 18 18:00:35 crc kubenswrapper[4812]: I0218 18:00:35.238913 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56557b685c-gmbbr_cf2063af-e1c3-4d59-8aed-39615ddeab3e/prometheus-operator-admission-webhook/0.log" Feb 18 18:00:35 crc kubenswrapper[4812]: I0218 18:00:35.284702 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56557b685c-p25pz_3a5d61a2-337d-4f14-ba0f-e1625e17d85b/prometheus-operator-admission-webhook/0.log" Feb 18 18:00:35 crc kubenswrapper[4812]: I0218 18:00:35.427023 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-8l5sf_38d2ae21-5a2d-42e7-8beb-e03bc7354dbe/operator/0.log" Feb 18 18:00:35 crc kubenswrapper[4812]: I0218 18:00:35.462691 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-g467b_7b7793e3-e91d-4d48-bacc-bdfd155dbc78/perses-operator/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.545466 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8dwvz_46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4/kube-rbac-proxy/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.583732 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8dwvz_46d04cd0-c5e3-4d50-a2be-e0e6e4070ba4/controller/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.704258 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-frr-files/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.884554 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-frr-files/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.889650 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-metrics/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.889805 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-reloader/0.log" Feb 18 18:00:48 crc kubenswrapper[4812]: I0218 18:00:48.934266 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-reloader/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.095112 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-frr-files/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.096554 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-reloader/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.110104 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-metrics/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.111452 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-metrics/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.297349 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-frr-files/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.299183 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-metrics/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.323438 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/cp-reloader/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.326399 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/controller/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.478646 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/frr-metrics/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.487394 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/kube-rbac-proxy/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.494496 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/kube-rbac-proxy-frr/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.629010 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/reloader/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 
18:00:49.714246 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-n67c7_8bb5b55f-c9d6-4e8e-b8f1-298dd16a0896/frr-k8s-webhook-server/0.log" Feb 18 18:00:49 crc kubenswrapper[4812]: I0218 18:00:49.930368 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8495744bb9-sfmg9_59215e89-cf30-4ef8-ab0d-fe665a3b2d70/manager/0.log" Feb 18 18:00:50 crc kubenswrapper[4812]: I0218 18:00:50.089706 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84fb85c775-sdk9c_ce82a09a-a70f-41f4-a4d6-15c1edc08d5e/webhook-server/0.log" Feb 18 18:00:50 crc kubenswrapper[4812]: I0218 18:00:50.152362 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7ctsm_77b4590b-339b-49df-b28e-88be89a335d1/kube-rbac-proxy/0.log" Feb 18 18:00:50 crc kubenswrapper[4812]: I0218 18:00:50.820192 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7ctsm_77b4590b-339b-49df-b28e-88be89a335d1/speaker/0.log" Feb 18 18:00:51 crc kubenswrapper[4812]: I0218 18:00:51.139969 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7w2pt_0ac2deeb-2838-428e-b648-0f9ea2d0aed5/frr/0.log" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.163672 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523961-7pm6h"] Feb 18 18:01:00 crc kubenswrapper[4812]: E0218 18:01:00.164686 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="extract-content" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.164704 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="extract-content" Feb 18 18:01:00 crc kubenswrapper[4812]: E0218 18:01:00.164722 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375fea5a-bafd-4076-bc7c-abbef843e3d6" containerName="collect-profiles" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.164731 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="375fea5a-bafd-4076-bc7c-abbef843e3d6" containerName="collect-profiles" Feb 18 18:01:00 crc kubenswrapper[4812]: E0218 18:01:00.164746 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.164754 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" Feb 18 18:01:00 crc kubenswrapper[4812]: E0218 18:01:00.164771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="extract-utilities" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.164779 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="extract-utilities" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.165006 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8b41f3-ac27-4b79-a7b6-1232a86edb9a" containerName="registry-server" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.165028 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="375fea5a-bafd-4076-bc7c-abbef843e3d6" containerName="collect-profiles" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.165904 
4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.184120 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523961-7pm6h"] Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.212939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7x9\" (UniqueName: \"kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.213223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.213277 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.213331 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.315021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.315086 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.315151 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.315254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q7x9\" (UniqueName: \"kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.321874 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.326644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.331468 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.339063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q7x9\" (UniqueName: \"kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9\") pod \"keystone-cron-29523961-7pm6h\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.490752 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:00 crc kubenswrapper[4812]: I0218 18:01:00.926670 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523961-7pm6h"] Feb 18 18:01:01 crc kubenswrapper[4812]: I0218 18:01:01.494425 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523961-7pm6h" event={"ID":"293b3489-c952-4f15-a82c-a3a5e668ae86","Type":"ContainerStarted","Data":"446f87b2ca255f0e9fe6022a6cbb1de95907d35716d559772c68bca78f4829e3"} Feb 18 18:01:01 crc kubenswrapper[4812]: I0218 18:01:01.494483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523961-7pm6h" event={"ID":"293b3489-c952-4f15-a82c-a3a5e668ae86","Type":"ContainerStarted","Data":"7abd38429f5fea2d79c3b0109bc13a3c5281b028675080a44a9128c23c25b4e4"} Feb 18 18:01:01 crc kubenswrapper[4812]: I0218 18:01:01.517700 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523961-7pm6h" podStartSLOduration=1.517681868 podStartE2EDuration="1.517681868s" podCreationTimestamp="2026-02-18 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 18:01:01.512086078 +0000 UTC m=+5481.777696987" watchObservedRunningTime="2026-02-18 18:01:01.517681868 +0000 UTC m=+5481.783292767" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.181876 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/util/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.366724 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/pull/0.log" Feb 18 18:01:03 crc 
kubenswrapper[4812]: I0218 18:01:03.376344 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/pull/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.378439 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/util/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.413537 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.413603 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.568745 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/extract/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.572373 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/util/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.637316 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0899r8p_df5ec246-8380-4818-8e51-36ab37833c23/pull/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.752187 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/util/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.887057 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/pull/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.887125 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/pull/0.log" Feb 18 18:01:03 crc kubenswrapper[4812]: I0218 18:01:03.933441 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/util/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.117157 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/extract/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.148756 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/pull/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.174156 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jtkxn_ec6d2ba5-6719-472f-a1b5-e5d0bd746608/util/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: E0218 18:01:04.249589 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod293b3489_c952_4f15_a82c_a3a5e668ae86.slice/crio-conmon-446f87b2ca255f0e9fe6022a6cbb1de95907d35716d559772c68bca78f4829e3.scope\": RecentStats: unable to find data in memory cache]" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.343321 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-utilities/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.426531 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-content/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.434135 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-utilities/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.481707 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-content/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.519819 4812 generic.go:334] "Generic (PLEG): container finished" podID="293b3489-c952-4f15-a82c-a3a5e668ae86" containerID="446f87b2ca255f0e9fe6022a6cbb1de95907d35716d559772c68bca78f4829e3" exitCode=0 Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.519901 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523961-7pm6h" event={"ID":"293b3489-c952-4f15-a82c-a3a5e668ae86","Type":"ContainerDied","Data":"446f87b2ca255f0e9fe6022a6cbb1de95907d35716d559772c68bca78f4829e3"} Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.676296 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-utilities/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.680862 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/extract-content/0.log" Feb 18 18:01:04 crc kubenswrapper[4812]: I0218 18:01:04.923499 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-utilities/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.142237 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-content/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.216931 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-utilities/0.log" Feb 18 18:01:05 crc 
kubenswrapper[4812]: I0218 18:01:05.244003 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-content/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.410330 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88k9b_945dcf1c-04b0-4c76-9261-19d57706f47e/registry-server/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.441637 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-utilities/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.448482 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/extract-content/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.701795 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/util/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.890912 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.910501 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/util/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.920805 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/pull/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.925612 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/pull/0.log" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.930075 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys\") pod \"293b3489-c952-4f15-a82c-a3a5e668ae86\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.930234 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data\") pod \"293b3489-c952-4f15-a82c-a3a5e668ae86\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.930375 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q7x9\" (UniqueName: \"kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9\") pod \"293b3489-c952-4f15-a82c-a3a5e668ae86\" (UID: \"293b3489-c952-4f15-a82c-a3a5e668ae86\") " Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.930427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle\") pod \"293b3489-c952-4f15-a82c-a3a5e668ae86\" (UID: 
\"293b3489-c952-4f15-a82c-a3a5e668ae86\") " Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.950401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9" (OuterVolumeSpecName: "kube-api-access-4q7x9") pod "293b3489-c952-4f15-a82c-a3a5e668ae86" (UID: "293b3489-c952-4f15-a82c-a3a5e668ae86"). InnerVolumeSpecName "kube-api-access-4q7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.952271 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "293b3489-c952-4f15-a82c-a3a5e668ae86" (UID: "293b3489-c952-4f15-a82c-a3a5e668ae86"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 18:01:05 crc kubenswrapper[4812]: I0218 18:01:05.972832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "293b3489-c952-4f15-a82c-a3a5e668ae86" (UID: "293b3489-c952-4f15-a82c-a3a5e668ae86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.003265 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data" (OuterVolumeSpecName: "config-data") pod "293b3489-c952-4f15-a82c-a3a5e668ae86" (UID: "293b3489-c952-4f15-a82c-a3a5e668ae86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.032142 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q7x9\" (UniqueName: \"kubernetes.io/projected/293b3489-c952-4f15-a82c-a3a5e668ae86-kube-api-access-4q7x9\") on node \"crc\" DevicePath \"\"" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.032177 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.032190 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.032201 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/293b3489-c952-4f15-a82c-a3a5e668ae86-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.117915 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/pull/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.146951 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/util/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.165526 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaths46_a3bb6960-2b83-4e82-86d3-5ada0d7be18a/extract/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.201125 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lmj92_170cd4cd-fb98-45b4-a075-3ded1e2fb964/registry-server/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.349618 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-p5ppf_083e70e9-e72b-4e1b-a398-ebe2c7610368/marketplace-operator/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.368574 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-utilities/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.536939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523961-7pm6h" event={"ID":"293b3489-c952-4f15-a82c-a3a5e668ae86","Type":"ContainerDied","Data":"7abd38429f5fea2d79c3b0109bc13a3c5281b028675080a44a9128c23c25b4e4"} Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.537255 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7abd38429f5fea2d79c3b0109bc13a3c5281b028675080a44a9128c23c25b4e4" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.536993 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523961-7pm6h" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.543410 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-content/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.554872 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-utilities/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.567733 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-content/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.747208 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-utilities/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.778433 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/extract-content/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.889377 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g67qg_bb36f508-805c-42ff-94ce-25f8739f66ed/registry-server/0.log" Feb 18 18:01:06 crc kubenswrapper[4812]: I0218 18:01:06.970143 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-utilities/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.089674 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-utilities/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.098740 4812 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-content/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.110773 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-content/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.263613 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-utilities/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.268917 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/extract-content/0.log" Feb 18 18:01:07 crc kubenswrapper[4812]: I0218 18:01:07.884042 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk4mz_05cfe267-0637-43ce-8c4b-393fe990136d/registry-server/0.log" Feb 18 18:01:19 crc kubenswrapper[4812]: I0218 18:01:19.459138 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56557b685c-gmbbr_cf2063af-e1c3-4d59-8aed-39615ddeab3e/prometheus-operator-admission-webhook/0.log" Feb 18 18:01:19 crc kubenswrapper[4812]: I0218 18:01:19.462399 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56557b685c-p25pz_3a5d61a2-337d-4f14-ba0f-e1625e17d85b/prometheus-operator-admission-webhook/0.log" Feb 18 18:01:19 crc kubenswrapper[4812]: I0218 18:01:19.485934 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cdpj2_c18e9953-e57b-4c8e-832e-a8a62a1b00d4/prometheus-operator/0.log" Feb 18 18:01:19 crc kubenswrapper[4812]: I0218 18:01:19.646143 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-g467b_7b7793e3-e91d-4d48-bacc-bdfd155dbc78/perses-operator/0.log" Feb 18 18:01:19 crc kubenswrapper[4812]: I0218 18:01:19.662558 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-8l5sf_38d2ae21-5a2d-42e7-8beb-e03bc7354dbe/operator/0.log" Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.413524 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.413927 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.413985 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.414739 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.414790 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521" gracePeriod=600 Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.790289 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521" exitCode=0 Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.790367 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521"} Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.790568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896"} Feb 18 18:01:33 crc kubenswrapper[4812]: I0218 18:01:33.790589 4812 scope.go:117] "RemoveContainer" containerID="32f00bedb23525a69b9bf36f787aa94a6411ccddfd92e1143f7ba574d3a32177" Feb 18 18:01:40 crc kubenswrapper[4812]: E0218 18:01:40.296137 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:46460->38.102.83.106:36505: write tcp 38.102.83.106:46460->38.102.83.106:36505: write: broken pipe Feb 18 18:03:11 crc kubenswrapper[4812]: I0218 18:03:11.865901 4812 generic.go:334] "Generic (PLEG): container finished" podID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerID="712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7" exitCode=0 Feb 18 18:03:11 crc kubenswrapper[4812]: I0218 18:03:11.866031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q556p/must-gather-mfh22" event={"ID":"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9","Type":"ContainerDied","Data":"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7"} Feb 18 18:03:11 crc kubenswrapper[4812]: I0218 18:03:11.867111 4812 scope.go:117] "RemoveContainer" containerID="712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7" Feb 18 18:03:12 crc kubenswrapper[4812]: I0218 18:03:12.001242 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-q556p_must-gather-mfh22_ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9/gather/0.log" Feb 18 18:03:14 crc kubenswrapper[4812]: E0218 18:03:14.579817 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:59160->38.102.83.106:36505: write tcp 38.102.83.106:59160->38.102.83.106:36505: write: broken pipe Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.170760 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q556p/must-gather-mfh22"] Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.171762 4812 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-must-gather-q556p/must-gather-mfh22" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="copy" containerID="cri-o://a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd" gracePeriod=2 Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.190961 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q556p/must-gather-mfh22"] Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.618768 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-q556p_must-gather-mfh22_ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9/copy/0.log" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.619379 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.716567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output\") pod \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.717164 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5q4m\" (UniqueName: \"kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m\") pod \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\" (UID: \"ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9\") " Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.738426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m" (OuterVolumeSpecName: "kube-api-access-s5q4m") pod "ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" (UID: "ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9"). InnerVolumeSpecName "kube-api-access-s5q4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.820586 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5q4m\" (UniqueName: \"kubernetes.io/projected/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-kube-api-access-s5q4m\") on node \"crc\" DevicePath \"\"" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.923690 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" (UID: "ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.993372 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-q556p_must-gather-mfh22_ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9/copy/0.log" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.993864 4812 generic.go:334] "Generic (PLEG): container finished" podID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerID="a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd" exitCode=143 Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.993956 4812 scope.go:117] "RemoveContainer" containerID="a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd" Feb 18 18:03:21 crc kubenswrapper[4812]: I0218 18:03:21.993993 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q556p/must-gather-mfh22" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.024443 4812 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.027500 4812 scope.go:117] "RemoveContainer" containerID="712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.155156 4812 scope.go:117] "RemoveContainer" containerID="a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd" Feb 18 18:03:22 crc kubenswrapper[4812]: E0218 18:03:22.156145 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd\": container with ID starting with a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd not found: ID does not exist" containerID="a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.156203 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd"} err="failed to get container status \"a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd\": rpc error: code = NotFound desc = could not find container \"a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd\": container with ID starting with a9535a33f9fa1061ae31a8402c21e6239c00c729ce8d7965c7490fed8b0c75fd not found: ID does not exist" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.156232 4812 scope.go:117] "RemoveContainer" containerID="712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7" Feb 18 18:03:22 crc kubenswrapper[4812]: E0218 18:03:22.156713 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7\": container with ID starting with 712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7 not found: ID does not exist" containerID="712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.156736 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7"} err="failed to get container status \"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7\": rpc error: code = NotFound desc = could not find container \"712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7\": container with ID starting with 712c03fe06aa2df90cdd5ecc6ebdefd41b39a64665ebc96631ff75ab225056a7 not found: ID does not exist" Feb 18 18:03:22 crc kubenswrapper[4812]: I0218 18:03:22.523340 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" path="/var/lib/kubelet/pods/ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9/volumes" Feb 18 18:03:33 crc kubenswrapper[4812]: I0218 18:03:33.413799 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:03:33 crc kubenswrapper[4812]: I0218 18:03:33.414422 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:04:03 crc kubenswrapper[4812]: I0218 18:04:03.413875 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:04:03 crc kubenswrapper[4812]: I0218 18:04:03.414821 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:04:17 crc kubenswrapper[4812]: I0218 18:04:17.506627 4812 scope.go:117] "RemoveContainer" containerID="a5af720ae881806c124e6585e8a51a6b8df477d072ed1a535801f85f6434ef06" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.413706 4812 patch_prober.go:28] interesting pod/machine-config-daemon-hhkxg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.414495 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.414574 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.415535 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896"} pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.415659 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerName="machine-config-daemon" containerID="cri-o://f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" gracePeriod=600 Feb 18 18:04:33 crc kubenswrapper[4812]: E0218 18:04:33.542989 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.791028 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bc4da39-1fda-4604-a089-b90b684c8a46" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" exitCode=0 Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.791160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerDied","Data":"f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896"} Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.791642 4812 scope.go:117] "RemoveContainer" containerID="416a69377730c6510cd7fb3b81926f0d10a5564653de48695a79436ca1cc6521" Feb 18 18:04:33 crc kubenswrapper[4812]: I0218 18:04:33.792767 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:04:33 crc kubenswrapper[4812]: E0218 18:04:33.793476 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:04:46 crc kubenswrapper[4812]: I0218 18:04:46.509037 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:04:46 crc kubenswrapper[4812]: E0218 18:04:46.510174 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:04:58 crc kubenswrapper[4812]: I0218 18:04:58.508533 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:04:58 crc kubenswrapper[4812]: E0218 18:04:58.509544 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:05:12 crc kubenswrapper[4812]: I0218 18:05:12.509021 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:05:12 crc kubenswrapper[4812]: E0218 18:05:12.510013 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:05:26 crc kubenswrapper[4812]: I0218 18:05:26.508159 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:05:26 crc kubenswrapper[4812]: E0218 18:05:26.509132 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:05:38 crc kubenswrapper[4812]: I0218 18:05:38.508546 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:05:38 crc kubenswrapper[4812]: E0218 18:05:38.509737 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:05:52 crc kubenswrapper[4812]: I0218 18:05:52.508261 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:05:52 crc kubenswrapper[4812]: E0218 18:05:52.509077 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.874928 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:05:56 crc kubenswrapper[4812]: E0218 18:05:56.875639 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="copy" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875651 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="copy" Feb 18 18:05:56 crc kubenswrapper[4812]: E0218 18:05:56.875669 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293b3489-c952-4f15-a82c-a3a5e668ae86" containerName="keystone-cron" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875675 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="293b3489-c952-4f15-a82c-a3a5e668ae86" containerName="keystone-cron" Feb 18 18:05:56 crc kubenswrapper[4812]: E0218 18:05:56.875706 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="gather" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875712 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="gather" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875908 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="293b3489-c952-4f15-a82c-a3a5e668ae86" 
containerName="keystone-cron" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875920 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="copy" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.875942 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae47f8dc-e2ec-4d0b-949a-b2ce2ed18eb9" containerName="gather" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.878489 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.894549 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.975757 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v476x\" (UniqueName: \"kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.976157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:56 crc kubenswrapper[4812]: I0218 18:05:56.976262 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.078324 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.078482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v476x\" (UniqueName: \"kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.078529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.079066 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") 
" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.079069 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.104739 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v476x\" (UniqueName: \"kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x\") pod \"community-operators-hf6nz\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.209597 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:05:57 crc kubenswrapper[4812]: I0218 18:05:57.711036 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:05:58 crc kubenswrapper[4812]: I0218 18:05:58.664773 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerID="c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c" exitCode=0 Feb 18 18:05:58 crc kubenswrapper[4812]: I0218 18:05:58.664825 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerDied","Data":"c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c"} Feb 18 18:05:58 crc kubenswrapper[4812]: I0218 18:05:58.665060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerStarted","Data":"63c19478b197efe5be27d19ce4c9249c17af68506808fdeb89a37bde92a026a8"} Feb 18 18:05:58 crc kubenswrapper[4812]: I0218 18:05:58.667338 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.687512 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.691533 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.704587 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.844025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6b75\" (UniqueName: \"kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.844400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.844449 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.946710 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6b75\" (UniqueName: \"kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.946780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.946826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.947266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.947426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:05:59 crc kubenswrapper[4812]: I0218 18:05:59.968630 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x6b75\" (UniqueName: \"kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75\") pod \"redhat-marketplace-4kjm9\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.024368 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.479481 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:06:00 crc kubenswrapper[4812]: W0218 18:06:00.482818 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d76884_b847_4e59_b0c8_e3f20352986c.slice/crio-4e2449105310a531d801859ca7c5e19024b8cd82cd79cc5031b5bbffe383bd23 WatchSource:0}: Error finding container 4e2449105310a531d801859ca7c5e19024b8cd82cd79cc5031b5bbffe383bd23: Status 404 returned error can't find the container with id 4e2449105310a531d801859ca7c5e19024b8cd82cd79cc5031b5bbffe383bd23 Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.698525 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerID="19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64" exitCode=0 Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.698601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerDied","Data":"19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64"} Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.698631 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerStarted","Data":"4e2449105310a531d801859ca7c5e19024b8cd82cd79cc5031b5bbffe383bd23"} Feb 18 18:06:00 crc kubenswrapper[4812]: I0218 18:06:00.701989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerStarted","Data":"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8"} Feb 18 18:06:01 crc kubenswrapper[4812]: I0218 18:06:01.715418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerStarted","Data":"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d"} Feb 18 18:06:01 crc kubenswrapper[4812]: I0218 18:06:01.717999 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerID="2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8" exitCode=0 Feb 18 18:06:01 crc kubenswrapper[4812]: I0218 18:06:01.718027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerDied","Data":"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8"} Feb 18 18:06:02 crc kubenswrapper[4812]: I0218 18:06:02.732302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" 
event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerStarted","Data":"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b"} Feb 18 18:06:02 crc kubenswrapper[4812]: I0218 18:06:02.734063 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerID="0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d" exitCode=0 Feb 18 18:06:02 crc kubenswrapper[4812]: I0218 18:06:02.734145 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerDied","Data":"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d"} Feb 18 18:06:02 crc kubenswrapper[4812]: I0218 18:06:02.758531 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hf6nz" podStartSLOduration=3.077724471 podStartE2EDuration="6.758501462s" podCreationTimestamp="2026-02-18 18:05:56 +0000 UTC" firstStartedPulling="2026-02-18 18:05:58.667024951 +0000 UTC m=+5778.932635860" lastFinishedPulling="2026-02-18 18:06:02.347801922 +0000 UTC m=+5782.613412851" observedRunningTime="2026-02-18 18:06:02.750895362 +0000 UTC m=+5783.016506291" watchObservedRunningTime="2026-02-18 18:06:02.758501462 +0000 UTC m=+5783.024112411" Feb 18 18:06:03 crc kubenswrapper[4812]: I0218 18:06:03.748637 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerStarted","Data":"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099"} Feb 18 18:06:03 crc kubenswrapper[4812]: I0218 18:06:03.775232 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4kjm9" podStartSLOduration=2.27155121 podStartE2EDuration="4.775204476s" podCreationTimestamp="2026-02-18 18:05:59 +0000 UTC" firstStartedPulling="2026-02-18 18:06:00.700515321 +0000 UTC m=+5780.966126230" lastFinishedPulling="2026-02-18 18:06:03.204168557 +0000 UTC m=+5783.469779496" observedRunningTime="2026-02-18 18:06:03.77293552 +0000 UTC m=+5784.038546439" watchObservedRunningTime="2026-02-18 18:06:03.775204476 +0000 UTC m=+5784.040815425" Feb 18 18:06:04 crc kubenswrapper[4812]: I0218 18:06:04.508486 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:06:04 crc kubenswrapper[4812]: E0218 18:06:04.508822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:06:07 crc kubenswrapper[4812]: I0218 18:06:07.209836 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:07 crc kubenswrapper[4812]: I0218 18:06:07.210313 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:07 crc kubenswrapper[4812]: I0218 18:06:07.266588 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:07 crc kubenswrapper[4812]: I0218 18:06:07.852095 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:08 crc kubenswrapper[4812]: I0218 18:06:08.262337 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:06:09 crc kubenswrapper[4812]: I0218 18:06:09.814635 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hf6nz" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="registry-server" containerID="cri-o://a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b" gracePeriod=2 Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.025196 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.025770 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.101665 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.319130 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.469780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v476x\" (UniqueName: \"kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x\") pod \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.469887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities\") pod \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.469998 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content\") pod \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\" (UID: \"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3\") " Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.470691 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities" (OuterVolumeSpecName: "utilities") pod "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" (UID: "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.478412 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x" (OuterVolumeSpecName: "kube-api-access-v476x") pod "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" (UID: "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3"). InnerVolumeSpecName "kube-api-access-v476x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.544828 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" (UID: "b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.573826 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.573877 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v476x\" (UniqueName: \"kubernetes.io/projected/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-kube-api-access-v476x\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.573898 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.826393 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerID="a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b" exitCode=0 Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.826463 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hf6nz" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.826532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerDied","Data":"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b"} Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.826565 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hf6nz" event={"ID":"b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3","Type":"ContainerDied","Data":"63c19478b197efe5be27d19ce4c9249c17af68506808fdeb89a37bde92a026a8"} Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.826585 4812 scope.go:117] "RemoveContainer" containerID="a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.866740 4812 scope.go:117] "RemoveContainer" containerID="2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.871783 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.883877 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hf6nz"] Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.886019 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.892808 4812 scope.go:117] "RemoveContainer" containerID="c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.948872 4812 
scope.go:117] "RemoveContainer" containerID="a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b" Feb 18 18:06:10 crc kubenswrapper[4812]: E0218 18:06:10.949439 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b\": container with ID starting with a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b not found: ID does not exist" containerID="a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.949491 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b"} err="failed to get container status \"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b\": rpc error: code = NotFound desc = could not find container \"a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b\": container with ID starting with a98bb9e43978770f7bc74a24294c2b5db6703bcb0446eb1c4a4614ac92f7ae9b not found: ID does not exist" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.949521 4812 scope.go:117] "RemoveContainer" containerID="2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8" Feb 18 18:06:10 crc kubenswrapper[4812]: E0218 18:06:10.949961 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8\": container with ID starting with 2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8 not found: ID does not exist" containerID="2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.950021 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8"} err="failed to get container status \"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8\": rpc error: code = NotFound desc = could not find container \"2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8\": container with ID starting with 2703727d3f7a1c76c02e8277b202b0daf5d19c42ae6f33a2b2465a8575d231d8 not found: ID does not exist" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.950054 4812 scope.go:117] "RemoveContainer" containerID="c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c" Feb 18 18:06:10 crc kubenswrapper[4812]: E0218 18:06:10.950340 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c\": container with ID starting with c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c not found: ID does not exist" containerID="c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c" Feb 18 18:06:10 crc kubenswrapper[4812]: I0218 18:06:10.950374 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c"} err="failed to get container status \"c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c\": rpc error: code = NotFound desc = could not find container \"c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c\": container with ID starting with 
c3591c165d6c063000a5b19ea5d3f5d4ff9692e5f2ec61bf7086e4ceb962431c not found: ID does not exist" Feb 18 18:06:12 crc kubenswrapper[4812]: I0218 18:06:12.462723 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:06:12 crc kubenswrapper[4812]: I0218 18:06:12.532060 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" path="/var/lib/kubelet/pods/b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3/volumes" Feb 18 18:06:13 crc kubenswrapper[4812]: I0218 18:06:13.862825 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4kjm9" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="registry-server" containerID="cri-o://9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099" gracePeriod=2 Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.383887 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.566753 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content\") pod \"e9d76884-b847-4e59-b0c8-e3f20352986c\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.567188 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6b75\" (UniqueName: \"kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75\") pod \"e9d76884-b847-4e59-b0c8-e3f20352986c\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.567333 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities\") pod \"e9d76884-b847-4e59-b0c8-e3f20352986c\" (UID: \"e9d76884-b847-4e59-b0c8-e3f20352986c\") " Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.568025 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities" (OuterVolumeSpecName: "utilities") pod "e9d76884-b847-4e59-b0c8-e3f20352986c" (UID: "e9d76884-b847-4e59-b0c8-e3f20352986c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.575602 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75" (OuterVolumeSpecName: "kube-api-access-x6b75") pod "e9d76884-b847-4e59-b0c8-e3f20352986c" (UID: "e9d76884-b847-4e59-b0c8-e3f20352986c"). InnerVolumeSpecName "kube-api-access-x6b75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.600543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9d76884-b847-4e59-b0c8-e3f20352986c" (UID: "e9d76884-b847-4e59-b0c8-e3f20352986c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.669417 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.669448 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d76884-b847-4e59-b0c8-e3f20352986c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.669461 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6b75\" (UniqueName: \"kubernetes.io/projected/e9d76884-b847-4e59-b0c8-e3f20352986c-kube-api-access-x6b75\") on node \"crc\" DevicePath \"\"" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.879480 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerID="9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099" exitCode=0 Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.879517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerDied","Data":"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099"} Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.879546 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kjm9" event={"ID":"e9d76884-b847-4e59-b0c8-e3f20352986c","Type":"ContainerDied","Data":"4e2449105310a531d801859ca7c5e19024b8cd82cd79cc5031b5bbffe383bd23"} Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.879561 4812 scope.go:117] "RemoveContainer" containerID="9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.879626 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kjm9" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.911844 4812 scope.go:117] "RemoveContainer" containerID="0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.932778 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.941502 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kjm9"] Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.954488 4812 scope.go:117] "RemoveContainer" containerID="19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.987344 4812 scope.go:117] "RemoveContainer" containerID="9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099" Feb 18 18:06:14 crc kubenswrapper[4812]: E0218 18:06:14.988040 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099\": container with ID starting with 9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099 not found: ID does not exist" containerID="9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.988077 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099"} err="failed to get container status \"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099\": rpc error: code = NotFound desc = could not find container \"9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099\": container with ID starting with 9bdecbae2426801f5abfd1082849716d63abda7c8facea67d834c35514708099 not found: ID does not exist" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.988115 4812 scope.go:117] "RemoveContainer" containerID="0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d" Feb 18 18:06:14 crc kubenswrapper[4812]: E0218 18:06:14.988516 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d\": container with ID starting with 0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d not found: ID does not exist" containerID="0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.988542 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d"} err="failed to get container status \"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d\": rpc error: code = NotFound desc = could not find container \"0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d\": container with ID starting with 0528191c1fd8171c680c1c67ba3e9beb2fec58d45c33929c452260f2bdf5f48d not found: ID does not exist" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.988558 4812 scope.go:117] "RemoveContainer" containerID="19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64" Feb 18 18:06:14 crc kubenswrapper[4812]: E0218 18:06:14.989080 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64\": container with ID starting with 19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64 not found: ID does not exist" containerID="19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64" Feb 18 18:06:14 crc kubenswrapper[4812]: I0218 18:06:14.989185 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64"} err="failed to get container status \"19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64\": rpc error: code = NotFound desc = could not find container \"19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64\": container with ID starting with 19f216a4cfb19f049e9c8c98e6d28c89a6927d1e504f03ad3415725ee2801a64 not found: ID does not exist" Feb 18 18:06:16 crc kubenswrapper[4812]: I0218 18:06:16.536165 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" path="/var/lib/kubelet/pods/e9d76884-b847-4e59-b0c8-e3f20352986c/volumes" Feb 18 18:06:18 crc kubenswrapper[4812]: I0218 18:06:18.509066 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:06:18 crc kubenswrapper[4812]: E0218 18:06:18.509685 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:06:31 crc kubenswrapper[4812]: I0218 18:06:31.510257 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:06:31 crc kubenswrapper[4812]: E0218 18:06:31.511606 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:06:44 crc kubenswrapper[4812]: I0218 18:06:44.509470 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:06:44 crc kubenswrapper[4812]: E0218 18:06:44.510602 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:06:55 crc kubenswrapper[4812]: I0218 18:06:55.508243 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:06:55 crc kubenswrapper[4812]: E0218 18:06:55.509493 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:07:06 crc kubenswrapper[4812]: I0218 18:07:06.508840 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:07:06 crc kubenswrapper[4812]: E0218 18:07:06.509731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:07:20 crc kubenswrapper[4812]: I0218 18:07:20.514316 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:07:20 crc kubenswrapper[4812]: E0218 18:07:20.515169 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:07:31 crc kubenswrapper[4812]: I0218 18:07:31.508034 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:07:31 crc kubenswrapper[4812]: E0218 18:07:31.509164 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:07:45 crc kubenswrapper[4812]: I0218 18:07:45.508312 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:07:45 crc kubenswrapper[4812]: E0218 18:07:45.509230 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:07:59 crc kubenswrapper[4812]: I0218 18:07:59.508991 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:07:59 crc kubenswrapper[4812]: E0218 18:07:59.510367 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:08:11 crc kubenswrapper[4812]: I0218 18:08:11.508433 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:08:11 crc kubenswrapper[4812]: E0218 18:08:11.509469 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:08:22 crc kubenswrapper[4812]: I0218 18:08:22.508853 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:08:22 crc kubenswrapper[4812]: E0218 18:08:22.509671 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:08:36 crc kubenswrapper[4812]: I0218 18:08:36.508678 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:08:36 crc kubenswrapper[4812]: E0218 18:08:36.509395 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:08:47 crc kubenswrapper[4812]: I0218 18:08:47.508737 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:08:47 crc kubenswrapper[4812]: E0218 18:08:47.509945 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:09:01 crc kubenswrapper[4812]: I0218 18:09:01.508981 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:09:01 crc kubenswrapper[4812]: E0218 18:09:01.509936 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" 
podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.664619 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665674 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="extract-content" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665692 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="extract-content" Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665708 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665714 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665729 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="extract-utilities" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665736 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="extract-utilities" Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665753 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="extract-content" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665759 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="extract-content" Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665777 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: E0218 18:09:07.665787 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="extract-utilities" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.665793 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="extract-utilities" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.666026 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d76884-b847-4e59-b0c8-e3f20352986c" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.666055 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ab3ab5-2bb1-48d1-aa5d-c28e776eb8a3" containerName="registry-server" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.667541 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.691732 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.760460 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.761068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.761326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphbf\" (UniqueName: \"kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.863770 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.863883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphbf\" (UniqueName: \"kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.863979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.864611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.864610 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.887777 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kphbf\" (UniqueName: \"kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf\") pod \"certified-operators-gbsh8\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:07 crc kubenswrapper[4812]: I0218 18:09:07.999161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:08 crc kubenswrapper[4812]: I0218 18:09:08.526157 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:08 crc kubenswrapper[4812]: I0218 18:09:08.727167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerStarted","Data":"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4"} Feb 18 18:09:08 crc kubenswrapper[4812]: I0218 18:09:08.727213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerStarted","Data":"4adb38251fa9e041882adf1de196809f8eef57d27ca1e04aa7a130cd31c3c266"} Feb 18 18:09:09 crc kubenswrapper[4812]: I0218 18:09:09.742937 4812 generic.go:334] "Generic (PLEG): container finished" podID="20914350-c50c-404e-b69a-8bebdc44118c" containerID="f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4" exitCode=0 Feb 18 18:09:09 crc kubenswrapper[4812]: I0218 18:09:09.744761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerDied","Data":"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4"} Feb 18 18:09:10 crc kubenswrapper[4812]: I0218 18:09:10.756791 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerStarted","Data":"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80"} Feb 18 18:09:12 crc kubenswrapper[4812]: I0218 18:09:12.774376 4812 generic.go:334] "Generic (PLEG): container finished" podID="20914350-c50c-404e-b69a-8bebdc44118c" containerID="b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80" exitCode=0 Feb 18 18:09:12 crc kubenswrapper[4812]: I0218 18:09:12.774461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerDied","Data":"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80"} Feb 18 18:09:13 crc kubenswrapper[4812]: I0218 18:09:13.789205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerStarted","Data":"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395"} Feb 18 18:09:13 crc kubenswrapper[4812]: I0218 18:09:13.812065 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbsh8" podStartSLOduration=3.396317364 podStartE2EDuration="6.812042424s" podCreationTimestamp="2026-02-18 18:09:07 +0000 UTC" firstStartedPulling="2026-02-18 18:09:09.746091531 +0000 UTC m=+5970.011702430" lastFinishedPulling="2026-02-18 
18:09:13.161816581 +0000 UTC m=+5973.427427490" observedRunningTime="2026-02-18 18:09:13.806647311 +0000 UTC m=+5974.072258220" watchObservedRunningTime="2026-02-18 18:09:13.812042424 +0000 UTC m=+5974.077653343" Feb 18 18:09:15 crc kubenswrapper[4812]: I0218 18:09:15.508516 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:09:15 crc kubenswrapper[4812]: E0218 18:09:15.509006 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:09:18 crc kubenswrapper[4812]: I0218 18:09:17.999630 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:18 crc kubenswrapper[4812]: I0218 18:09:18.000368 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:18 crc kubenswrapper[4812]: I0218 18:09:18.076473 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:18 crc kubenswrapper[4812]: I0218 18:09:18.905256 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:18 crc kubenswrapper[4812]: I0218 18:09:18.966661 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:20 crc kubenswrapper[4812]: I0218 18:09:20.865237 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbsh8" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="registry-server" containerID="cri-o://b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395" gracePeriod=2 Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.348387 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.358748 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kphbf\" (UniqueName: \"kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf\") pod \"20914350-c50c-404e-b69a-8bebdc44118c\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.358816 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities\") pod \"20914350-c50c-404e-b69a-8bebdc44118c\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.358976 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content\") pod \"20914350-c50c-404e-b69a-8bebdc44118c\" (UID: \"20914350-c50c-404e-b69a-8bebdc44118c\") " Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.369175 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities" (OuterVolumeSpecName: "utilities") pod "20914350-c50c-404e-b69a-8bebdc44118c" (UID: "20914350-c50c-404e-b69a-8bebdc44118c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.372419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf" (OuterVolumeSpecName: "kube-api-access-kphbf") pod "20914350-c50c-404e-b69a-8bebdc44118c" (UID: "20914350-c50c-404e-b69a-8bebdc44118c"). InnerVolumeSpecName "kube-api-access-kphbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.461411 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kphbf\" (UniqueName: \"kubernetes.io/projected/20914350-c50c-404e-b69a-8bebdc44118c-kube-api-access-kphbf\") on node \"crc\" DevicePath \"\"" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.461442 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.539519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20914350-c50c-404e-b69a-8bebdc44118c" (UID: "20914350-c50c-404e-b69a-8bebdc44118c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.563680 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20914350-c50c-404e-b69a-8bebdc44118c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.878939 4812 generic.go:334] "Generic (PLEG): container finished" podID="20914350-c50c-404e-b69a-8bebdc44118c" containerID="b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395" exitCode=0 Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.878983 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerDied","Data":"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395"} Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.879012 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbsh8" event={"ID":"20914350-c50c-404e-b69a-8bebdc44118c","Type":"ContainerDied","Data":"4adb38251fa9e041882adf1de196809f8eef57d27ca1e04aa7a130cd31c3c266"} Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.879029 4812 scope.go:117] "RemoveContainer" containerID="b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.879039 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbsh8" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.914565 4812 scope.go:117] "RemoveContainer" containerID="b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80" Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.928171 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.950877 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbsh8"] Feb 18 18:09:21 crc kubenswrapper[4812]: I0218 18:09:21.960100 4812 scope.go:117] "RemoveContainer" containerID="f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.005889 4812 scope.go:117] "RemoveContainer" containerID="b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395" Feb 18 18:09:22 crc kubenswrapper[4812]: E0218 18:09:22.006389 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395\": container with ID starting with b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395 not found: ID does not exist" containerID="b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.006478 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395"} err="failed to get container status \"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395\": rpc error: code = NotFound desc = could not find container \"b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395\": container with ID starting with b86dd74ff69425e059e661ec06f8d814d402412197ea8583cb6b0dfc67d19395 not found: ID does not exist" Feb 18 
18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.006500 4812 scope.go:117] "RemoveContainer" containerID="b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80" Feb 18 18:09:22 crc kubenswrapper[4812]: E0218 18:09:22.006795 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80\": container with ID starting with b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80 not found: ID does not exist" containerID="b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.006816 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80"} err="failed to get container status \"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80\": rpc error: code = NotFound desc = could not find container \"b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80\": container with ID starting with b3533d00754062f39681f45393527d694ff593980f2290ed7907af15e996ac80 not found: ID does not exist" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.006828 4812 scope.go:117] "RemoveContainer" containerID="f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4" Feb 18 18:09:22 crc kubenswrapper[4812]: E0218 18:09:22.007258 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4\": container with ID starting with f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4 not found: ID does not exist" containerID="f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.007285 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4"} err="failed to get container status \"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4\": rpc error: code = NotFound desc = could not find container \"f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4\": container with ID starting with f241cc03f2776d687c6c8853d2b81f5f57c5988d4d116372ecb7442b363986e4 not found: ID does not exist" Feb 18 18:09:22 crc kubenswrapper[4812]: I0218 18:09:22.524757 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20914350-c50c-404e-b69a-8bebdc44118c" path="/var/lib/kubelet/pods/20914350-c50c-404e-b69a-8bebdc44118c/volumes" Feb 18 18:09:28 crc kubenswrapper[4812]: I0218 18:09:28.511325 4812 scope.go:117] "RemoveContainer" containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:09:28 crc kubenswrapper[4812]: E0218 18:09:28.512522 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hhkxg_openshift-machine-config-operator(4bc4da39-1fda-4604-a089-b90b684c8a46)\"" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" podUID="4bc4da39-1fda-4604-a089-b90b684c8a46" Feb 18 18:09:43 crc kubenswrapper[4812]: I0218 18:09:43.508942 4812 scope.go:117] "RemoveContainer" 
containerID="f7401244b31f6aad34b3103ddecda28df49b895a68d3289c39dec40cb0402896" Feb 18 18:09:44 crc kubenswrapper[4812]: I0218 18:09:44.166712 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hhkxg" event={"ID":"4bc4da39-1fda-4604-a089-b90b684c8a46","Type":"ContainerStarted","Data":"0cbf3aace64dc4b87be052b932bd16e2358e7b27daa07244a8c3acb9196de43e"} Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.662979 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d8gsg"] Feb 18 18:10:06 crc kubenswrapper[4812]: E0218 18:10:06.663824 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="extract-content" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.663837 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="extract-content" Feb 18 18:10:06 crc kubenswrapper[4812]: E0218 18:10:06.663859 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="registry-server" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.663866 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="registry-server" Feb 18 18:10:06 crc kubenswrapper[4812]: E0218 18:10:06.663892 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="extract-utilities" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.663899 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="extract-utilities" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.664167 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="20914350-c50c-404e-b69a-8bebdc44118c" containerName="registry-server" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.665976 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.677588 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8gsg"] Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.848774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-catalog-content\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.848813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-utilities\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.848953 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c9lf\" (UniqueName: \"kubernetes.io/projected/a0044ca6-9544-40be-aa71-43471aac420d-kube-api-access-2c9lf\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.950347 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c9lf\" (UniqueName: \"kubernetes.io/projected/a0044ca6-9544-40be-aa71-43471aac420d-kube-api-access-2c9lf\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.950447 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-catalog-content\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.950483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-utilities\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.951123 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-utilities\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.951471 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0044ca6-9544-40be-aa71-43471aac420d-catalog-content\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:06 crc kubenswrapper[4812]: I0218 18:10:06.969672 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2c9lf\" (UniqueName: \"kubernetes.io/projected/a0044ca6-9544-40be-aa71-43471aac420d-kube-api-access-2c9lf\") pod \"redhat-operators-d8gsg\" (UID: \"a0044ca6-9544-40be-aa71-43471aac420d\") " pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:07 crc kubenswrapper[4812]: I0218 18:10:07.008294 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d8gsg" Feb 18 18:10:07 crc kubenswrapper[4812]: I0218 18:10:07.478599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8gsg"] Feb 18 18:10:08 crc kubenswrapper[4812]: I0218 18:10:08.440752 4812 generic.go:334] "Generic (PLEG): container finished" podID="a0044ca6-9544-40be-aa71-43471aac420d" containerID="8d207c0e82c0911b752e9de0327a1bf33c8261d917c23542500e2c1be0c35191" exitCode=0 Feb 18 18:10:08 crc kubenswrapper[4812]: I0218 18:10:08.440798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8gsg" event={"ID":"a0044ca6-9544-40be-aa71-43471aac420d","Type":"ContainerDied","Data":"8d207c0e82c0911b752e9de0327a1bf33c8261d917c23542500e2c1be0c35191"} Feb 18 18:10:08 crc kubenswrapper[4812]: I0218 18:10:08.440827 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8gsg" event={"ID":"a0044ca6-9544-40be-aa71-43471aac420d","Type":"ContainerStarted","Data":"766270d72bb8754a78688c1fa52fd3d4ee77e096bfc3bd6b5b34580eb292c3f7"} Feb 18 18:10:10 crc kubenswrapper[4812]: I0218 18:10:10.463634 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8gsg" event={"ID":"a0044ca6-9544-40be-aa71-43471aac420d","Type":"ContainerStarted","Data":"05e40c0096bdc88496335ef901fc5bf27f64f9f9311dbd0eff922a488a2c0a2f"}